00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2027 00:00:00.001 originally caused by: 00:00:00.001 Started by user Latecki, Karol 00:00:00.008 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.010 The recommended git tool is: git 00:00:00.010 using credential 00000000-0000-0000-0000-000000000002 00:00:00.012 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.032 Fetching changes from the remote Git repository 00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.062 Using shallow fetch with depth 1 00:00:00.062 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.062 > git --version # timeout=10 00:00:00.098 > git --version # 'git version 2.39.2' 00:00:00.098 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.123 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.123 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:04.823 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.839 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.856 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:04.856 > git config core.sparsecheckout # timeout=10 00:00:04.869 > git read-tree -mu HEAD # timeout=10 00:00:04.886 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:04.906 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:04.907 > git rev-list --no-walk e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=10 00:00:05.061 [Pipeline] Start of Pipeline 00:00:05.076 [Pipeline] library 00:00:05.078 Loading library shm_lib@master 00:00:05.078 Library shm_lib@master is cached. Copying from home. 00:00:05.096 [Pipeline] node 00:00:20.098 Still waiting to schedule task 00:00:20.098 Waiting for next available executor on ‘vagrant-vm-host’ 00:18:43.869 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu24-vg-autotest_2 00:18:43.872 [Pipeline] { 00:18:43.884 [Pipeline] catchError 00:18:43.886 [Pipeline] { 00:18:43.899 [Pipeline] wrap 00:18:43.908 [Pipeline] { 00:18:43.919 [Pipeline] stage 00:18:43.922 [Pipeline] { (Prologue) 00:18:43.950 [Pipeline] echo 00:18:43.952 Node: VM-host-SM0 00:18:43.960 [Pipeline] cleanWs 00:18:43.972 [WS-CLEANUP] Deleting project workspace... 00:18:43.972 [WS-CLEANUP] Deferred wipeout is used... 
00:18:43.979 [WS-CLEANUP] done 00:18:44.293 [Pipeline] setCustomBuildProperty 00:18:44.412 [Pipeline] httpRequest 00:18:44.441 [Pipeline] echo 00:18:44.443 Sorcerer 10.211.164.101 is alive 00:18:44.453 [Pipeline] httpRequest 00:18:44.458 HttpMethod: GET 00:18:44.459 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:18:44.460 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:18:44.462 Response Code: HTTP/1.1 200 OK 00:18:44.462 Success: Status code 200 is in the accepted range: 200,404 00:18:44.463 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest_2/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:18:44.611 [Pipeline] sh 00:18:44.893 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:18:44.916 [Pipeline] httpRequest 00:18:44.938 [Pipeline] echo 00:18:44.941 Sorcerer 10.211.164.101 is alive 00:18:44.955 [Pipeline] httpRequest 00:18:44.961 HttpMethod: GET 00:18:44.962 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:18:44.962 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:18:44.965 Response Code: HTTP/1.1 200 OK 00:18:44.966 Success: Status code 200 is in the accepted range: 200,404 00:18:44.966 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest_2/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:18:47.137 [Pipeline] sh 00:18:47.418 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:18:50.714 [Pipeline] sh 00:18:50.995 + git -C spdk log --oneline -n5 00:18:50.995 dbef7efac test: fix dpdk builds on ubuntu24 00:18:50.995 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:18:50.995 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:18:50.995 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:18:50.995 e03c164a1 nvme: add nvme_ctrlr_lock 00:18:51.018 [Pipeline] writeFile 00:18:51.035 [Pipeline] sh 00:18:51.314 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:18:51.326 [Pipeline] sh 00:18:51.605 + cat autorun-spdk.conf 00:18:51.605 SPDK_TEST_UNITTEST=1 00:18:51.605 SPDK_RUN_FUNCTIONAL_TEST=1 00:18:51.605 SPDK_TEST_BLOCKDEV=1 00:18:51.605 SPDK_RUN_ASAN=1 00:18:51.605 SPDK_RUN_UBSAN=1 00:18:51.605 SPDK_TEST_RAID5=1 00:18:51.605 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:51.611 RUN_NIGHTLY=1 00:18:51.614 [Pipeline] } 00:18:51.633 [Pipeline] // stage 00:18:51.652 [Pipeline] stage 00:18:51.654 [Pipeline] { (Run VM) 00:18:51.670 [Pipeline] sh 00:18:51.948 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:18:51.948 + echo 'Start stage prepare_nvme.sh' 00:18:51.948 Start stage prepare_nvme.sh 00:18:51.948 + [[ -n 0 ]] 00:18:51.948 + disk_prefix=ex0 00:18:51.948 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest_2 ]] 00:18:51.948 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest_2/autorun-spdk.conf ]] 00:18:51.948 + source /var/jenkins/workspace/ubuntu24-vg-autotest_2/autorun-spdk.conf 00:18:51.948 ++ SPDK_TEST_UNITTEST=1 00:18:51.948 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:18:51.948 ++ SPDK_TEST_BLOCKDEV=1 00:18:51.948 ++ SPDK_RUN_ASAN=1 00:18:51.948 ++ SPDK_RUN_UBSAN=1 00:18:51.948 ++ SPDK_TEST_RAID5=1 00:18:51.948 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:18:51.948 ++ RUN_NIGHTLY=1 00:18:51.948 + cd /var/jenkins/workspace/ubuntu24-vg-autotest_2 00:18:51.948 + nvme_files=() 00:18:51.948 + declare -A nvme_files 00:18:51.948 + 
backend_dir=/var/lib/libvirt/images/backends 00:18:51.948 + nvme_files['nvme.img']=5G 00:18:51.948 + nvme_files['nvme-cmb.img']=5G 00:18:51.948 + nvme_files['nvme-multi0.img']=4G 00:18:51.948 + nvme_files['nvme-multi1.img']=4G 00:18:51.948 + nvme_files['nvme-multi2.img']=4G 00:18:51.948 + nvme_files['nvme-openstack.img']=8G 00:18:51.948 + nvme_files['nvme-zns.img']=5G 00:18:51.948 + (( SPDK_TEST_NVME_PMR == 1 )) 00:18:51.948 + (( SPDK_TEST_FTL == 1 )) 00:18:51.948 + (( SPDK_TEST_NVME_FDP == 1 )) 00:18:51.948 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:18:51.948 + for nvme in "${!nvme_files[@]}" 00:18:51.948 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:18:51.948 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:18:51.948 + for nvme in "${!nvme_files[@]}" 00:18:51.948 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:18:51.948 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:18:51.948 + for nvme in "${!nvme_files[@]}" 00:18:51.948 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:18:51.948 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:18:51.948 + for nvme in "${!nvme_files[@]}" 00:18:51.948 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:18:51.948 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:18:51.948 + for nvme in "${!nvme_files[@]}" 00:18:51.948 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:18:51.948 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:18:51.948 + for nvme in "${!nvme_files[@]}" 00:18:51.948 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:18:51.948 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:18:51.948 + for nvme in "${!nvme_files[@]}" 00:18:51.948 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:18:51.948 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:18:51.948 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:18:52.206 + echo 'End stage prepare_nvme.sh' 00:18:52.206 End stage prepare_nvme.sh 00:18:52.220 [Pipeline] sh 00:18:52.529 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:18:52.529 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -H -a -v -f ubuntu2404 00:18:52.529 00:18:52.529 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest_2/spdk/scripts/vagrant 00:18:52.529 SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest_2/spdk 00:18:52.529 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest_2 00:18:52.529 HELP=0 00:18:52.529 DRY_RUN=0 00:18:52.529 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img, 00:18:52.529 NVME_DISKS_TYPE=nvme, 00:18:52.529 
NVME_AUTO_CREATE=0 00:18:52.529 NVME_DISKS_NAMESPACES=, 00:18:52.529 NVME_CMB=, 00:18:52.529 NVME_PMR=, 00:18:52.529 NVME_ZNS=, 00:18:52.529 NVME_MS=, 00:18:52.529 NVME_FDP=, 00:18:52.529 SPDK_VAGRANT_DISTRO=ubuntu2404 00:18:52.529 SPDK_VAGRANT_VMCPU=10 00:18:52.529 SPDK_VAGRANT_VMRAM=12288 00:18:52.529 SPDK_VAGRANT_PROVIDER=libvirt 00:18:52.529 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:18:52.529 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:18:52.529 SPDK_OPENSTACK_NETWORK=0 00:18:52.529 VAGRANT_PACKAGE_BOX=0 00:18:52.529 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:18:52.529 FORCE_DISTRO=true 00:18:52.529 VAGRANT_BOX_VERSION= 00:18:52.529 EXTRA_VAGRANTFILES= 00:18:52.529 NIC_MODEL=e1000 00:18:52.529 00:18:52.529 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt' 00:18:52.529 /var/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest_2 00:18:55.827 Bringing machine 'default' up with 'libvirt' provider... 00:18:56.394 ==> default: Creating image (snapshot of base box volume). 00:18:56.653 ==> default: Creating domain with the following settings... 00:18:56.653 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1721663760_359a7d24048e864ce1f9 00:18:56.653 ==> default: -- Domain type: kvm 00:18:56.653 ==> default: -- Cpus: 10 00:18:56.653 ==> default: -- Feature: acpi 00:18:56.653 ==> default: -- Feature: apic 00:18:56.653 ==> default: -- Feature: pae 00:18:56.653 ==> default: -- Memory: 12288M 00:18:56.653 ==> default: -- Memory Backing: hugepages: 00:18:56.653 ==> default: -- Management MAC: 00:18:56.653 ==> default: -- Loader: 00:18:56.653 ==> default: -- Nvram: 00:18:56.653 ==> default: -- Base box: spdk/ubuntu2404 00:18:56.653 ==> default: -- Storage pool: default 00:18:56.653 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1721663760_359a7d24048e864ce1f9.img (20G) 00:18:56.653 ==> default: -- Volume Cache: default 00:18:56.653 ==> default: -- Kernel: 00:18:56.653 ==> default: -- Initrd: 00:18:56.653 ==> default: -- Graphics Type: vnc 00:18:56.653 ==> default: -- Graphics Port: -1 00:18:56.653 ==> default: -- Graphics IP: 127.0.0.1 00:18:56.653 ==> default: -- Graphics Password: Not defined 00:18:56.653 ==> default: -- Video Type: cirrus 00:18:56.653 ==> default: -- Video VRAM: 9216 00:18:56.653 ==> default: -- Sound Type: 00:18:56.653 ==> default: -- Keymap: en-us 00:18:56.653 ==> default: -- TPM Path: 00:18:56.653 ==> default: -- INPUT: type=mouse, bus=ps2 00:18:56.653 ==> default: -- Command line args: 00:18:56.653 ==> default: -> value=-device, 00:18:56.653 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:18:56.653 ==> default: -> value=-drive, 00:18:56.653 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:18:56.653 ==> default: -> value=-device, 00:18:56.653 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:18:56.912 ==> default: Creating shared folders metadata... 00:18:56.912 ==> default: Starting domain. 00:18:59.445 ==> default: Waiting for domain to get an IP address... 00:19:09.425 ==> default: Waiting for SSH to become available... 00:19:10.799 ==> default: Configuring and enabling network interfaces... 
00:19:16.120 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:19:21.384 ==> default: Mounting SSHFS shared folder... 00:19:22.318 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output 00:19:22.318 ==> default: Checking Mount.. 00:19:23.253 ==> default: Folder Successfully Mounted! 00:19:23.253 ==> default: Running provisioner: file... 00:19:23.510 default: ~/.gitconfig => .gitconfig 00:19:23.769 00:19:23.769 SUCCESS! 00:19:23.769 00:19:23.769 cd to /var/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt and type "vagrant ssh" to use. 00:19:23.769 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:19:23.769 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt" to destroy all trace of vm. 00:19:23.769 00:19:23.778 [Pipeline] } 00:19:23.798 [Pipeline] // stage 00:19:23.809 [Pipeline] dir 00:19:23.810 Running in /var/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt 00:19:23.812 [Pipeline] { 00:19:23.827 [Pipeline] catchError 00:19:23.829 [Pipeline] { 00:19:23.844 [Pipeline] sh 00:19:24.124 + vagrant ssh-config --host vagrant 00:19:24.124 + sed -ne /^Host/,$p 00:19:24.124 + tee ssh_conf 00:19:27.409 Host vagrant 00:19:27.409 HostName 192.168.121.209 00:19:27.409 User vagrant 00:19:27.409 Port 22 00:19:27.409 UserKnownHostsFile /dev/null 00:19:27.409 StrictHostKeyChecking no 00:19:27.409 PasswordAuthentication no 00:19:27.409 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404 00:19:27.409 IdentitiesOnly yes 00:19:27.409 LogLevel FATAL 00:19:27.409 ForwardAgent yes 00:19:27.409 ForwardX11 yes 00:19:27.409 00:19:27.422 [Pipeline] withEnv 00:19:27.425 [Pipeline] { 00:19:27.441 [Pipeline] sh 00:19:27.722 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:19:27.722 source /etc/os-release 00:19:27.722 [[ -e /image.version ]] && img=$(< /image.version) 00:19:27.722 # Minimal, systemd-like check. 00:19:27.722 if [[ -e /.dockerenv ]]; then 00:19:27.722 # Clear garbage from the node's name: 00:19:27.722 # agt-er_autotest_547-896 -> autotest_547-896 00:19:27.722 # $HOSTNAME is the actual container id 00:19:27.722 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:19:27.722 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:19:27.722 # We can assume this is a mount from a host where container is running, 00:19:27.722 # so fetch its hostname to easily identify the target swarm worker. 
00:19:27.722 container="$(< /etc/hostname) ($agent)" 00:19:27.722 else 00:19:27.722 # Fallback 00:19:27.722 container=$agent 00:19:27.722 fi 00:19:27.722 fi 00:19:27.722 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:19:27.722 00:19:28.051 [Pipeline] } 00:19:28.071 [Pipeline] // withEnv 00:19:28.081 [Pipeline] setCustomBuildProperty 00:19:28.096 [Pipeline] stage 00:19:28.099 [Pipeline] { (Tests) 00:19:28.118 [Pipeline] sh 00:19:28.398 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:19:28.669 [Pipeline] sh 00:19:28.946 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:19:29.217 [Pipeline] timeout 00:19:29.217 Timeout set to expire in 1 hr 30 min 00:19:29.219 [Pipeline] { 00:19:29.232 [Pipeline] sh 00:19:29.512 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:19:30.079 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24 00:19:30.092 [Pipeline] sh 00:19:30.373 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:19:30.645 [Pipeline] sh 00:19:30.925 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:19:31.198 [Pipeline] sh 00:19:31.478 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo 00:19:31.736 ++ readlink -f spdk_repo 00:19:31.736 + DIR_ROOT=/home/vagrant/spdk_repo 00:19:31.736 + [[ -n /home/vagrant/spdk_repo ]] 00:19:31.736 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:19:31.736 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:19:31.736 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:19:31.736 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:19:31.736 + [[ -d /home/vagrant/spdk_repo/output ]] 00:19:31.736 + [[ ubuntu24-vg-autotest == pkgdep-* ]] 00:19:31.736 + cd /home/vagrant/spdk_repo 00:19:31.736 + source /etc/os-release 00:19:31.736 ++ PRETTY_NAME='Ubuntu 24.04 LTS' 00:19:31.736 ++ NAME=Ubuntu 00:19:31.736 ++ VERSION_ID=24.04 00:19:31.736 ++ VERSION='24.04 LTS (Noble Numbat)' 00:19:31.736 ++ VERSION_CODENAME=noble 00:19:31.736 ++ ID=ubuntu 00:19:31.736 ++ ID_LIKE=debian 00:19:31.736 ++ HOME_URL=https://www.ubuntu.com/ 00:19:31.736 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:19:31.736 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:19:31.736 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:19:31.736 ++ UBUNTU_CODENAME=noble 00:19:31.736 ++ LOGO=ubuntu-logo 00:19:31.736 + uname -a 00:19:31.736 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:19:31.736 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:19:31.736 Hugepages 00:19:31.736 node hugesize free / total 00:19:31.736 node0 1048576kB 0 / 0 00:19:31.736 node0 2048kB 0 / 0 00:19:31.736 00:19:31.736 Type BDF Vendor Device NUMA Driver Device Block devices 00:19:31.995 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:19:31.996 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:19:31.996 + rm -f /tmp/spdk-ld-path 00:19:31.996 + source autorun-spdk.conf 00:19:31.996 ++ SPDK_TEST_UNITTEST=1 00:19:31.996 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:19:31.996 ++ SPDK_TEST_BLOCKDEV=1 00:19:31.996 ++ SPDK_RUN_ASAN=1 00:19:31.996 ++ SPDK_RUN_UBSAN=1 00:19:31.996 ++ SPDK_TEST_RAID5=1 00:19:31.996 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:19:31.996 ++ RUN_NIGHTLY=1 00:19:31.996 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:19:31.996 + [[ -n '' ]] 00:19:31.996 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:19:31.996 + for M in /var/spdk/build-*-manifest.txt 00:19:31.996 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:19:31.996 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:19:31.996 + for M in /var/spdk/build-*-manifest.txt 00:19:31.996 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:19:31.996 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:19:31.996 ++ uname 00:19:31.996 + [[ Linux == \L\i\n\u\x ]] 00:19:31.996 + sudo dmesg -T 00:19:31.996 + sudo dmesg --clear 00:19:31.996 + dmesg_pid=2366 00:19:31.996 + sudo dmesg -Tw 00:19:31.996 + [[ Ubuntu == FreeBSD ]] 00:19:31.996 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:31.996 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:31.996 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:19:31.996 + [[ -x /usr/src/fio-static/fio ]] 00:19:31.996 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:19:31.996 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:19:31.996 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:19:31.996 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:19:31.996 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:19:31.996 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:19:31.996 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:19:31.996 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:19:31.996 Test configuration: 00:19:31.996 SPDK_TEST_UNITTEST=1 00:19:31.996 SPDK_RUN_FUNCTIONAL_TEST=1 00:19:31.996 SPDK_TEST_BLOCKDEV=1 00:19:31.996 SPDK_RUN_ASAN=1 00:19:31.996 SPDK_RUN_UBSAN=1 00:19:31.996 SPDK_TEST_RAID5=1 00:19:31.996 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:19:31.996 RUN_NIGHTLY=1 15:56:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.996 15:56:35 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:31.996 15:56:35 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.996 15:56:35 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.996 15:56:35 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:31.996 15:56:35 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:31.996 15:56:35 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:31.996 15:56:35 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:31.996 15:56:35 -- paths/export.sh@6 -- $ export PATH 00:19:31.996 15:56:35 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:19:31.996 15:56:35 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:31.996 15:56:35 -- common/autobuild_common.sh@438 -- $ date +%s 00:19:31.996 15:56:35 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721663795.XXXXXX 00:19:31.996 15:56:35 -- 
common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721663795.5s9x40 00:19:31.996 15:56:35 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:19:31.996 15:56:35 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:19:31.996 15:56:35 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:31.996 15:56:35 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:31.996 15:56:35 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:31.996 15:56:35 -- common/autobuild_common.sh@454 -- $ get_config_params 00:19:31.996 15:56:35 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:19:31.996 15:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:19:31.996 15:56:35 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:19:31.996 15:56:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:19:31.996 15:56:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:19:31.996 15:56:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:31.996 15:56:35 -- spdk/autobuild.sh@16 -- $ date -u 00:19:31.996 Mon Jul 22 15:56:35 UTC 2024 00:19:31.996 15:56:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:19:32.254 LTS-60-gdbef7efac 00:19:32.254 15:56:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:19:32.254 15:56:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:19:32.254 15:56:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:19:32.254 15:56:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:19:32.254 15:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:19:32.254 ************************************ 00:19:32.254 START TEST asan 00:19:32.254 ************************************ 00:19:32.254 using asan 00:19:32.254 15:56:35 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:19:32.254 00:19:32.254 real 0m0.000s 00:19:32.254 user 0m0.000s 00:19:32.254 sys 0m0.000s 00:19:32.254 15:56:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:19:32.254 ************************************ 00:19:32.254 END TEST asan 00:19:32.254 ************************************ 00:19:32.254 15:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:19:32.254 15:56:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:19:32.254 15:56:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:19:32.254 15:56:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:19:32.254 15:56:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:19:32.254 15:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:19:32.254 ************************************ 00:19:32.254 START TEST ubsan 00:19:32.254 ************************************ 00:19:32.254 using ubsan 00:19:32.254 15:56:35 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:19:32.254 00:19:32.254 real 0m0.000s 00:19:32.254 user 0m0.000s 00:19:32.254 sys 0m0.000s 00:19:32.254 15:56:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:19:32.254 15:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:19:32.254 ************************************ 00:19:32.254 END TEST ubsan 00:19:32.254 
************************************ 00:19:32.254 15:56:35 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:19:32.254 15:56:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:19:32.254 15:56:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:19:32.254 15:56:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:19:32.254 15:56:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:19:32.254 15:56:35 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:19:32.254 15:56:35 -- spdk/autobuild.sh@58 -- $ unittest_build 00:19:32.254 15:56:35 -- common/autobuild_common.sh@414 -- $ run_test unittest_build _unittest_build 00:19:32.254 15:56:35 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:19:32.254 15:56:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:19:32.254 15:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:19:32.254 ************************************ 00:19:32.254 START TEST unittest_build 00:19:32.254 ************************************ 00:19:32.254 15:56:35 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:19:32.254 15:56:35 -- common/autobuild_common.sh@405 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --without-shared 00:19:32.254 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:32.254 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:32.821 Using 'verbs' RDMA provider 00:19:48.658 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:20:00.858 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:20:01.424 Creating mk/config.mk...done. 00:20:01.424 Creating mk/cc.flags.mk...done. 00:20:01.424 Type 'make' to build. 00:20:01.424 15:57:05 -- common/autobuild_common.sh@406 -- $ make -j10 00:20:01.424 make[1]: Nothing to be done for 'all'. 
00:20:13.713 The Meson build system 00:20:13.713 Version: 1.4.1 00:20:13.713 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:20:13.713 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:20:13.713 Build type: native build 00:20:13.713 Program cat found: YES (/usr/bin/cat) 00:20:13.713 Project name: DPDK 00:20:13.713 Project version: 23.11.0 00:20:13.713 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:20:13.713 C linker for the host machine: cc ld.bfd 2.42 00:20:13.713 Host machine cpu family: x86_64 00:20:13.713 Host machine cpu: x86_64 00:20:13.713 Message: ## Building in Developer Mode ## 00:20:13.713 Program pkg-config found: YES (/usr/bin/pkg-config) 00:20:13.713 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:20:13.713 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:20:13.713 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:20:13.713 Program cat found: YES (/usr/bin/cat) 00:20:13.713 Compiler for C supports arguments -march=native: YES 00:20:13.713 Checking for size of "void *" : 8 00:20:13.713 Checking for size of "void *" : 8 (cached) 00:20:13.713 Library m found: YES 00:20:13.713 Library numa found: YES 00:20:13.713 Has header "numaif.h" : YES 00:20:13.713 Library fdt found: NO 00:20:13.713 Library execinfo found: NO 00:20:13.713 Has header "execinfo.h" : YES 00:20:13.713 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:20:13.713 Run-time dependency libarchive found: NO (tried pkgconfig) 00:20:13.713 Run-time dependency libbsd found: NO (tried pkgconfig) 00:20:13.713 Run-time dependency jansson found: NO (tried pkgconfig) 00:20:13.713 Run-time dependency openssl found: YES 3.0.13 00:20:13.713 Run-time dependency libpcap found: NO (tried pkgconfig) 00:20:13.713 Library pcap found: NO 00:20:13.713 Compiler for C supports arguments -Wcast-qual: YES 00:20:13.713 Compiler for C supports arguments -Wdeprecated: YES 00:20:13.713 Compiler for C supports arguments -Wformat: YES 00:20:13.713 Compiler for C supports arguments -Wformat-nonliteral: YES 00:20:13.713 Compiler for C supports arguments -Wformat-security: YES 00:20:13.713 Compiler for C supports arguments -Wmissing-declarations: YES 00:20:13.713 Compiler for C supports arguments -Wmissing-prototypes: YES 00:20:13.713 Compiler for C supports arguments -Wnested-externs: YES 00:20:13.713 Compiler for C supports arguments -Wold-style-definition: YES 00:20:13.713 Compiler for C supports arguments -Wpointer-arith: YES 00:20:13.713 Compiler for C supports arguments -Wsign-compare: YES 00:20:13.713 Compiler for C supports arguments -Wstrict-prototypes: YES 00:20:13.713 Compiler for C supports arguments -Wundef: YES 00:20:13.713 Compiler for C supports arguments -Wwrite-strings: YES 00:20:13.713 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:20:13.713 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:20:13.713 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:20:13.713 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:20:13.713 Program objdump found: YES (/usr/bin/objdump) 00:20:13.713 Compiler for C supports arguments -mavx512f: YES 00:20:13.713 Checking if "AVX512 checking" compiles: YES 00:20:13.713 Fetching value of define "__SSE4_2__" : 1 00:20:13.713 Fetching value of define "__AES__" : 1 00:20:13.713 Fetching value of define "__AVX__" : 1 00:20:13.713 
Fetching value of define "__AVX2__" : 1 00:20:13.713 Fetching value of define "__AVX512BW__" : (undefined) 00:20:13.713 Fetching value of define "__AVX512CD__" : (undefined) 00:20:13.713 Fetching value of define "__AVX512DQ__" : (undefined) 00:20:13.713 Fetching value of define "__AVX512F__" : (undefined) 00:20:13.713 Fetching value of define "__AVX512VL__" : (undefined) 00:20:13.713 Fetching value of define "__PCLMUL__" : 1 00:20:13.713 Fetching value of define "__RDRND__" : 1 00:20:13.713 Fetching value of define "__RDSEED__" : 1 00:20:13.713 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:20:13.713 Fetching value of define "__znver1__" : (undefined) 00:20:13.713 Fetching value of define "__znver2__" : (undefined) 00:20:13.713 Fetching value of define "__znver3__" : (undefined) 00:20:13.713 Fetching value of define "__znver4__" : (undefined) 00:20:13.714 Library asan found: YES 00:20:13.714 Compiler for C supports arguments -Wno-format-truncation: YES 00:20:13.714 Message: lib/log: Defining dependency "log" 00:20:13.714 Message: lib/kvargs: Defining dependency "kvargs" 00:20:13.714 Message: lib/telemetry: Defining dependency "telemetry" 00:20:13.714 Library rt found: YES 00:20:13.714 Checking for function "getentropy" : NO 00:20:13.714 Message: lib/eal: Defining dependency "eal" 00:20:13.714 Message: lib/ring: Defining dependency "ring" 00:20:13.714 Message: lib/rcu: Defining dependency "rcu" 00:20:13.714 Message: lib/mempool: Defining dependency "mempool" 00:20:13.714 Message: lib/mbuf: Defining dependency "mbuf" 00:20:13.714 Fetching value of define "__PCLMUL__" : 1 (cached) 00:20:13.714 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:20:13.714 Compiler for C supports arguments -mpclmul: YES 00:20:13.714 Compiler for C supports arguments -maes: YES 00:20:13.714 Compiler for C supports arguments -mavx512f: YES (cached) 00:20:13.714 Compiler for C supports arguments -mavx512bw: YES 00:20:13.714 Compiler for C supports arguments -mavx512dq: YES 00:20:13.714 Compiler for C supports arguments -mavx512vl: YES 00:20:13.714 Compiler for C supports arguments -mvpclmulqdq: YES 00:20:13.714 Compiler for C supports arguments -mavx2: YES 00:20:13.714 Compiler for C supports arguments -mavx: YES 00:20:13.714 Message: lib/net: Defining dependency "net" 00:20:13.714 Message: lib/meter: Defining dependency "meter" 00:20:13.714 Message: lib/ethdev: Defining dependency "ethdev" 00:20:13.714 Message: lib/pci: Defining dependency "pci" 00:20:13.714 Message: lib/cmdline: Defining dependency "cmdline" 00:20:13.714 Message: lib/hash: Defining dependency "hash" 00:20:13.714 Message: lib/timer: Defining dependency "timer" 00:20:13.714 Message: lib/compressdev: Defining dependency "compressdev" 00:20:13.714 Message: lib/cryptodev: Defining dependency "cryptodev" 00:20:13.714 Message: lib/dmadev: Defining dependency "dmadev" 00:20:13.714 Compiler for C supports arguments -Wno-cast-qual: YES 00:20:13.714 Message: lib/power: Defining dependency "power" 00:20:13.714 Message: lib/reorder: Defining dependency "reorder" 00:20:13.714 Message: lib/security: Defining dependency "security" 00:20:13.714 Has header "linux/userfaultfd.h" : YES 00:20:13.714 Has header "linux/vduse.h" : YES 00:20:13.714 Message: lib/vhost: Defining dependency "vhost" 00:20:13.714 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:20:13.714 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:20:13.714 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:20:13.714 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:20:13.714 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:20:13.714 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:20:13.714 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:20:13.714 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:20:13.714 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:20:13.714 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:20:13.714 Program doxygen found: YES (/usr/bin/doxygen) 00:20:13.714 Configuring doxy-api-html.conf using configuration 00:20:13.714 Configuring doxy-api-man.conf using configuration 00:20:13.714 Program mandb found: YES (/usr/bin/mandb) 00:20:13.714 Program sphinx-build found: NO 00:20:13.714 Configuring rte_build_config.h using configuration 00:20:13.714 Message: 00:20:13.714 ================= 00:20:13.714 Applications Enabled 00:20:13.714 ================= 00:20:13.714 00:20:13.714 apps: 00:20:13.714 00:20:13.714 00:20:13.714 Message: 00:20:13.714 ================= 00:20:13.714 Libraries Enabled 00:20:13.714 ================= 00:20:13.714 00:20:13.714 libs: 00:20:13.714 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:20:13.714 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:20:13.714 cryptodev, dmadev, power, reorder, security, vhost, 00:20:13.714 00:20:13.714 Message: 00:20:13.714 =============== 00:20:13.714 Drivers Enabled 00:20:13.714 =============== 00:20:13.714 00:20:13.714 common: 00:20:13.714 00:20:13.714 bus: 00:20:13.714 pci, vdev, 00:20:13.714 mempool: 00:20:13.714 ring, 00:20:13.714 dma: 00:20:13.714 00:20:13.714 net: 00:20:13.714 00:20:13.714 crypto: 00:20:13.714 00:20:13.714 compress: 00:20:13.714 00:20:13.714 vdpa: 00:20:13.714 00:20:13.714 00:20:13.714 Message: 00:20:13.714 ================= 00:20:13.714 Content Skipped 00:20:13.714 ================= 00:20:13.714 00:20:13.714 apps: 00:20:13.714 dumpcap: explicitly disabled via build config 00:20:13.714 graph: explicitly disabled via build config 00:20:13.714 pdump: explicitly disabled via build config 00:20:13.714 proc-info: explicitly disabled via build config 00:20:13.714 test-acl: explicitly disabled via build config 00:20:13.714 test-bbdev: explicitly disabled via build config 00:20:13.714 test-cmdline: explicitly disabled via build config 00:20:13.714 test-compress-perf: explicitly disabled via build config 00:20:13.714 test-crypto-perf: explicitly disabled via build config 00:20:13.714 test-dma-perf: explicitly disabled via build config 00:20:13.714 test-eventdev: explicitly disabled via build config 00:20:13.714 test-fib: explicitly disabled via build config 00:20:13.714 test-flow-perf: explicitly disabled via build config 00:20:13.714 test-gpudev: explicitly disabled via build config 00:20:13.714 test-mldev: explicitly disabled via build config 00:20:13.714 test-pipeline: explicitly disabled via build config 00:20:13.714 test-pmd: explicitly disabled via build config 00:20:13.714 test-regex: explicitly disabled via build config 00:20:13.714 test-sad: explicitly disabled via build config 00:20:13.714 test-security-perf: explicitly disabled via build config 00:20:13.714 00:20:13.714 libs: 00:20:13.714 metrics: explicitly disabled via build config 00:20:13.714 acl: explicitly disabled via build config 00:20:13.714 bbdev: explicitly disabled via build config 00:20:13.714 bitratestats: explicitly disabled via build config 
00:20:13.714 bpf: explicitly disabled via build config 00:20:13.714 cfgfile: explicitly disabled via build config 00:20:13.714 distributor: explicitly disabled via build config 00:20:13.714 efd: explicitly disabled via build config 00:20:13.714 eventdev: explicitly disabled via build config 00:20:13.714 dispatcher: explicitly disabled via build config 00:20:13.714 gpudev: explicitly disabled via build config 00:20:13.714 gro: explicitly disabled via build config 00:20:13.714 gso: explicitly disabled via build config 00:20:13.714 ip_frag: explicitly disabled via build config 00:20:13.714 jobstats: explicitly disabled via build config 00:20:13.714 latencystats: explicitly disabled via build config 00:20:13.714 lpm: explicitly disabled via build config 00:20:13.714 member: explicitly disabled via build config 00:20:13.714 pcapng: explicitly disabled via build config 00:20:13.714 rawdev: explicitly disabled via build config 00:20:13.714 regexdev: explicitly disabled via build config 00:20:13.714 mldev: explicitly disabled via build config 00:20:13.714 rib: explicitly disabled via build config 00:20:13.714 sched: explicitly disabled via build config 00:20:13.714 stack: explicitly disabled via build config 00:20:13.714 ipsec: explicitly disabled via build config 00:20:13.714 pdcp: explicitly disabled via build config 00:20:13.714 fib: explicitly disabled via build config 00:20:13.714 port: explicitly disabled via build config 00:20:13.714 pdump: explicitly disabled via build config 00:20:13.714 table: explicitly disabled via build config 00:20:13.714 pipeline: explicitly disabled via build config 00:20:13.714 graph: explicitly disabled via build config 00:20:13.714 node: explicitly disabled via build config 00:20:13.714 00:20:13.714 drivers: 00:20:13.714 common/cpt: not in enabled drivers build config 00:20:13.714 common/dpaax: not in enabled drivers build config 00:20:13.714 common/iavf: not in enabled drivers build config 00:20:13.714 common/idpf: not in enabled drivers build config 00:20:13.714 common/mvep: not in enabled drivers build config 00:20:13.714 common/octeontx: not in enabled drivers build config 00:20:13.715 bus/auxiliary: not in enabled drivers build config 00:20:13.715 bus/cdx: not in enabled drivers build config 00:20:13.715 bus/dpaa: not in enabled drivers build config 00:20:13.715 bus/fslmc: not in enabled drivers build config 00:20:13.715 bus/ifpga: not in enabled drivers build config 00:20:13.715 bus/platform: not in enabled drivers build config 00:20:13.715 bus/vmbus: not in enabled drivers build config 00:20:13.715 common/cnxk: not in enabled drivers build config 00:20:13.715 common/mlx5: not in enabled drivers build config 00:20:13.715 common/nfp: not in enabled drivers build config 00:20:13.715 common/qat: not in enabled drivers build config 00:20:13.715 common/sfc_efx: not in enabled drivers build config 00:20:13.715 mempool/bucket: not in enabled drivers build config 00:20:13.715 mempool/cnxk: not in enabled drivers build config 00:20:13.715 mempool/dpaa: not in enabled drivers build config 00:20:13.715 mempool/dpaa2: not in enabled drivers build config 00:20:13.715 mempool/octeontx: not in enabled drivers build config 00:20:13.715 mempool/stack: not in enabled drivers build config 00:20:13.715 dma/cnxk: not in enabled drivers build config 00:20:13.715 dma/dpaa: not in enabled drivers build config 00:20:13.715 dma/dpaa2: not in enabled drivers build config 00:20:13.715 dma/hisilicon: not in enabled drivers build config 00:20:13.715 dma/idxd: not in enabled drivers 
build config 00:20:13.715 dma/ioat: not in enabled drivers build config 00:20:13.715 dma/skeleton: not in enabled drivers build config 00:20:13.715 net/af_packet: not in enabled drivers build config 00:20:13.715 net/af_xdp: not in enabled drivers build config 00:20:13.715 net/ark: not in enabled drivers build config 00:20:13.715 net/atlantic: not in enabled drivers build config 00:20:13.715 net/avp: not in enabled drivers build config 00:20:13.715 net/axgbe: not in enabled drivers build config 00:20:13.715 net/bnx2x: not in enabled drivers build config 00:20:13.715 net/bnxt: not in enabled drivers build config 00:20:13.715 net/bonding: not in enabled drivers build config 00:20:13.715 net/cnxk: not in enabled drivers build config 00:20:13.715 net/cpfl: not in enabled drivers build config 00:20:13.715 net/cxgbe: not in enabled drivers build config 00:20:13.715 net/dpaa: not in enabled drivers build config 00:20:13.715 net/dpaa2: not in enabled drivers build config 00:20:13.715 net/e1000: not in enabled drivers build config 00:20:13.715 net/ena: not in enabled drivers build config 00:20:13.715 net/enetc: not in enabled drivers build config 00:20:13.715 net/enetfec: not in enabled drivers build config 00:20:13.715 net/enic: not in enabled drivers build config 00:20:13.715 net/failsafe: not in enabled drivers build config 00:20:13.715 net/fm10k: not in enabled drivers build config 00:20:13.715 net/gve: not in enabled drivers build config 00:20:13.715 net/hinic: not in enabled drivers build config 00:20:13.715 net/hns3: not in enabled drivers build config 00:20:13.715 net/i40e: not in enabled drivers build config 00:20:13.715 net/iavf: not in enabled drivers build config 00:20:13.715 net/ice: not in enabled drivers build config 00:20:13.715 net/idpf: not in enabled drivers build config 00:20:13.715 net/igc: not in enabled drivers build config 00:20:13.715 net/ionic: not in enabled drivers build config 00:20:13.715 net/ipn3ke: not in enabled drivers build config 00:20:13.715 net/ixgbe: not in enabled drivers build config 00:20:13.715 net/mana: not in enabled drivers build config 00:20:13.715 net/memif: not in enabled drivers build config 00:20:13.715 net/mlx4: not in enabled drivers build config 00:20:13.715 net/mlx5: not in enabled drivers build config 00:20:13.715 net/mvneta: not in enabled drivers build config 00:20:13.715 net/mvpp2: not in enabled drivers build config 00:20:13.715 net/netvsc: not in enabled drivers build config 00:20:13.715 net/nfb: not in enabled drivers build config 00:20:13.715 net/nfp: not in enabled drivers build config 00:20:13.715 net/ngbe: not in enabled drivers build config 00:20:13.715 net/null: not in enabled drivers build config 00:20:13.715 net/octeontx: not in enabled drivers build config 00:20:13.715 net/octeon_ep: not in enabled drivers build config 00:20:13.715 net/pcap: not in enabled drivers build config 00:20:13.715 net/pfe: not in enabled drivers build config 00:20:13.715 net/qede: not in enabled drivers build config 00:20:13.715 net/ring: not in enabled drivers build config 00:20:13.715 net/sfc: not in enabled drivers build config 00:20:13.715 net/softnic: not in enabled drivers build config 00:20:13.715 net/tap: not in enabled drivers build config 00:20:13.715 net/thunderx: not in enabled drivers build config 00:20:13.715 net/txgbe: not in enabled drivers build config 00:20:13.715 net/vdev_netvsc: not in enabled drivers build config 00:20:13.715 net/vhost: not in enabled drivers build config 00:20:13.715 net/virtio: not in enabled drivers build config 
00:20:13.715 net/vmxnet3: not in enabled drivers build config 00:20:13.715 raw/*: missing internal dependency, "rawdev" 00:20:13.715 crypto/armv8: not in enabled drivers build config 00:20:13.715 crypto/bcmfs: not in enabled drivers build config 00:20:13.715 crypto/caam_jr: not in enabled drivers build config 00:20:13.715 crypto/ccp: not in enabled drivers build config 00:20:13.715 crypto/cnxk: not in enabled drivers build config 00:20:13.715 crypto/dpaa_sec: not in enabled drivers build config 00:20:13.715 crypto/dpaa2_sec: not in enabled drivers build config 00:20:13.715 crypto/ipsec_mb: not in enabled drivers build config 00:20:13.715 crypto/mlx5: not in enabled drivers build config 00:20:13.715 crypto/mvsam: not in enabled drivers build config 00:20:13.715 crypto/nitrox: not in enabled drivers build config 00:20:13.715 crypto/null: not in enabled drivers build config 00:20:13.715 crypto/octeontx: not in enabled drivers build config 00:20:13.715 crypto/openssl: not in enabled drivers build config 00:20:13.715 crypto/scheduler: not in enabled drivers build config 00:20:13.715 crypto/uadk: not in enabled drivers build config 00:20:13.715 crypto/virtio: not in enabled drivers build config 00:20:13.715 compress/isal: not in enabled drivers build config 00:20:13.715 compress/mlx5: not in enabled drivers build config 00:20:13.715 compress/octeontx: not in enabled drivers build config 00:20:13.715 compress/zlib: not in enabled drivers build config 00:20:13.715 regex/*: missing internal dependency, "regexdev" 00:20:13.715 ml/*: missing internal dependency, "mldev" 00:20:13.715 vdpa/ifc: not in enabled drivers build config 00:20:13.715 vdpa/mlx5: not in enabled drivers build config 00:20:13.715 vdpa/nfp: not in enabled drivers build config 00:20:13.715 vdpa/sfc: not in enabled drivers build config 00:20:13.715 event/*: missing internal dependency, "eventdev" 00:20:13.715 baseband/*: missing internal dependency, "bbdev" 00:20:13.715 gpu/*: missing internal dependency, "gpudev" 00:20:13.715 00:20:13.715 00:20:13.715 Build targets in project: 85 00:20:13.715 00:20:13.715 DPDK 23.11.0 00:20:13.715 00:20:13.715 User defined options 00:20:13.715 buildtype : debug 00:20:13.715 default_library : static 00:20:13.715 libdir : lib 00:20:13.715 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:20:13.715 b_sanitize : address 00:20:13.715 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:20:13.715 c_link_args : 00:20:13.715 cpu_instruction_set: native 00:20:13.715 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:20:13.715 disable_libs : mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:20:13.715 enable_docs : false 00:20:13.715 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:20:13.715 enable_kmods : false 00:20:13.715 tests : false 00:20:13.715 00:20:13.715 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:20:13.715 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:20:13.974 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:20:13.974 [2/265] Linking static target lib/librte_kvargs.a 
00:20:13.974 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:20:13.974 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:20:13.974 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:20:13.974 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:20:13.974 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:20:13.974 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:20:14.233 [9/265] Linking static target lib/librte_log.a 00:20:14.233 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:20:14.491 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:20:14.491 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:20:14.491 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:20:14.491 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:20:14.491 [15/265] Linking static target lib/librte_telemetry.a 00:20:14.750 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:20:14.750 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:20:15.008 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:20:15.008 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:20:15.008 [20/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:20:15.008 [21/265] Linking target lib/librte_log.so.24.0 00:20:15.009 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:20:15.009 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:20:15.267 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:20:15.267 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:20:15.267 [26/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:20:15.267 [27/265] Linking target lib/librte_kvargs.so.24.0 00:20:15.267 [28/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:20:15.526 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:20:15.526 [30/265] Linking target lib/librte_telemetry.so.24.0 00:20:15.526 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:20:15.526 [32/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:20:15.526 [33/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:20:15.526 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:20:15.526 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:20:15.785 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:20:15.785 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:20:15.785 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:20:16.047 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:20:16.047 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:20:16.047 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:20:16.047 [42/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:20:16.047 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:20:16.047 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:20:16.047 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:20:16.047 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:20:16.311 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:20:16.311 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:20:16.573 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:20:16.573 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:20:16.573 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:20:16.573 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:20:16.832 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:20:16.832 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:20:16.832 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:20:16.832 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:20:16.832 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:20:16.832 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:20:16.832 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:20:17.092 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:20:17.092 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:20:17.092 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:20:17.092 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:20:17.092 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:20:17.351 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:20:17.351 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:20:17.351 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:20:17.351 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:20:17.351 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:20:17.610 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:20:17.610 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:20:17.610 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:20:17.610 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:20:17.610 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:20:17.610 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:20:17.610 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:20:17.869 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:20:17.869 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:20:17.869 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:20:17.869 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:20:18.127 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:20:18.127 [82/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:20:18.127 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:20:18.385 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:20:18.385 [85/265] Linking static target lib/librte_eal.a 00:20:18.385 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:20:18.385 [87/265] Linking static target lib/librte_ring.a 00:20:18.385 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:20:18.646 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:20:18.646 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:20:18.646 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:20:18.646 [92/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:20:18.646 [93/265] Linking static target lib/librte_mempool.a 00:20:18.646 [94/265] Linking static target lib/librte_rcu.a 00:20:18.646 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:20:18.904 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:20:18.904 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:20:19.162 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:20:19.162 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:20:19.162 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:20:19.162 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:20:19.421 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:20:19.421 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:20:19.421 [104/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:20:19.421 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:20:19.421 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:20:19.421 [107/265] Linking static target lib/librte_mbuf.a 00:20:19.421 [108/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:20:19.679 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:20:19.679 [110/265] Linking static target lib/librte_net.a 00:20:19.679 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:20:19.679 [112/265] Linking static target lib/librte_meter.a 00:20:19.938 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:20:19.938 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:20:19.938 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:20:19.938 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:20:20.196 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:20:20.196 [118/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:20:20.196 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:20:20.454 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:20:20.712 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:20:20.712 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:20:20.970 [123/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:20:20.970 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:20:20.970 [125/265] Linking static target lib/librte_pci.a 00:20:20.970 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:20:20.970 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:20:20.970 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:20:21.228 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:20:21.228 [130/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:20:21.228 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:20:21.228 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:20:21.228 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:20:21.228 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:20:21.228 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:20:21.228 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:20:21.229 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:20:21.486 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:20:21.486 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:20:21.486 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:20:21.486 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:20:21.486 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:20:21.743 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:20:21.743 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:20:21.743 [145/265] Linking static target lib/librte_cmdline.a 00:20:22.001 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:20:22.001 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:20:22.001 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:20:22.001 [149/265] Linking static target lib/librte_timer.a 00:20:22.259 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:20:22.259 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:20:22.259 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:20:22.518 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:20:22.518 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:20:22.518 [155/265] Linking static target lib/librte_ethdev.a 00:20:22.518 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:20:22.777 [157/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:20:22.777 [158/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:20:22.777 [159/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:20:22.777 [160/265] Linking static target lib/librte_hash.a 00:20:22.777 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:20:22.777 [162/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:20:22.777 [163/265] Linking static target 
lib/librte_compressdev.a 00:20:23.035 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:20:23.035 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:20:23.035 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:20:23.294 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:20:23.294 [168/265] Linking static target lib/librte_dmadev.a 00:20:23.294 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:20:23.294 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:20:23.294 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:23.294 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:20:23.552 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:23.552 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:20:23.810 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:20:23.810 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:20:24.069 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:20:24.069 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:20:24.069 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:20:24.069 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:20:24.069 [181/265] Linking static target lib/librte_cryptodev.a 00:20:24.069 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:20:24.069 [183/265] Linking static target lib/librte_power.a 00:20:24.636 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:20:24.636 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:20:24.636 [186/265] Linking static target lib/librte_reorder.a 00:20:24.636 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:20:24.636 [188/265] Linking static target lib/librte_security.a 00:20:24.636 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:20:24.636 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:20:24.896 [191/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:20:24.896 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:20:24.896 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:20:25.154 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:20:25.154 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:20:25.413 [196/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:25.413 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:20:25.413 [198/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:20:25.671 [199/265] Linking target lib/librte_eal.so.24.0 00:20:25.671 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:20:25.671 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:20:25.671 [202/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:20:25.671 
[203/265] Linking target lib/librte_ring.so.24.0 00:20:25.671 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:20:25.929 [205/265] Linking target lib/librte_meter.so.24.0 00:20:25.929 [206/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:20:25.929 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:20:25.929 [208/265] Linking target lib/librte_rcu.so.24.0 00:20:25.929 [209/265] Linking target lib/librte_mempool.so.24.0 00:20:25.929 [210/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:20:25.929 [211/265] Linking target lib/librte_pci.so.24.0 00:20:25.929 [212/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:20:26.187 [213/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:20:26.187 [214/265] Linking target lib/librte_timer.so.24.0 00:20:26.187 [215/265] Linking target lib/librte_mbuf.so.24.0 00:20:26.187 [216/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:20:26.187 [217/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:20:26.188 [218/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:20:26.188 [219/265] Linking target lib/librte_dmadev.so.24.0 00:20:26.188 [220/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:20:26.188 [221/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:20:26.188 [222/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:20:26.188 [223/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:20:26.188 [224/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:20:26.188 [225/265] Linking target lib/librte_compressdev.so.24.0 00:20:26.188 [226/265] Linking target lib/librte_net.so.24.0 00:20:26.188 [227/265] Linking target lib/librte_cryptodev.so.24.0 00:20:26.188 [228/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:20:26.188 [229/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:20:26.446 [230/265] Linking target lib/librte_reorder.so.24.0 00:20:26.446 [231/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:20:26.446 [232/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:20:26.446 [233/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:20:26.446 [234/265] Linking target lib/librte_hash.so.24.0 00:20:26.446 [235/265] Linking target lib/librte_cmdline.so.24.0 00:20:26.446 [236/265] Linking target lib/librte_security.so.24.0 00:20:26.446 [237/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:20:26.446 [238/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:20:26.446 [239/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:20:26.446 [240/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:20:26.446 [241/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:20:26.446 [242/265] Linking static target drivers/librte_bus_vdev.a 00:20:26.446 [243/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:20:26.704 [244/265] Linking static target drivers/librte_bus_pci.a 00:20:26.704 
[245/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:20:26.962 [246/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:26.962 [247/265] Linking target drivers/librte_bus_vdev.so.24.0 00:20:27.220 [248/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:20:27.220 [249/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:20:27.220 [250/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:20:27.220 [251/265] Linking target drivers/librte_bus_pci.so.24.0 00:20:27.220 [252/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:20:27.479 [253/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:20:27.479 [254/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:20:27.479 [255/265] Linking static target drivers/librte_mempool_ring.a 00:20:27.479 [256/265] Linking target drivers/librte_mempool_ring.so.24.0 00:20:28.044 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:28.302 [258/265] Linking target lib/librte_ethdev.so.24.0 00:20:28.302 [259/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:20:28.302 [260/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:20:28.568 [261/265] Linking target lib/librte_power.so.24.0 00:20:32.752 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:20:32.752 [263/265] Linking static target lib/librte_vhost.a 00:20:34.655 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:20:34.655 [265/265] Linking target lib/librte_vhost.so.24.0 00:20:34.655 INFO: autodetecting backend as ninja 00:20:34.655 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:20:35.594 CC lib/ut_mock/mock.o 00:20:35.594 CC lib/ut/ut.o 00:20:35.594 CC lib/log/log_flags.o 00:20:35.594 CC lib/log/log.o 00:20:35.594 CC lib/log/log_deprecated.o 00:20:35.852 LIB libspdk_ut_mock.a 00:20:35.852 LIB libspdk_ut.a 00:20:35.852 LIB libspdk_log.a 00:20:36.110 CC lib/ioat/ioat.o 00:20:36.110 CC lib/util/base64.o 00:20:36.110 CC lib/util/bit_array.o 00:20:36.110 CC lib/dma/dma.o 00:20:36.110 CC lib/util/cpuset.o 00:20:36.110 CXX lib/trace_parser/trace.o 00:20:36.110 CC lib/util/crc16.o 00:20:36.110 CC lib/util/crc32.o 00:20:36.110 CC lib/util/crc32c.o 00:20:36.110 CC lib/vfio_user/host/vfio_user_pci.o 00:20:36.368 CC lib/util/crc32_ieee.o 00:20:36.368 CC lib/util/crc64.o 00:20:36.368 CC lib/util/dif.o 00:20:36.368 CC lib/util/fd.o 00:20:36.368 CC lib/util/file.o 00:20:36.368 LIB libspdk_dma.a 00:20:36.368 CC lib/util/hexlify.o 00:20:36.368 CC lib/vfio_user/host/vfio_user.o 00:20:36.368 CC lib/util/iov.o 00:20:36.368 CC lib/util/math.o 00:20:36.625 CC lib/util/pipe.o 00:20:36.625 CC lib/util/strerror_tls.o 00:20:36.625 CC lib/util/string.o 00:20:36.625 CC lib/util/uuid.o 00:20:36.625 LIB libspdk_ioat.a 00:20:36.625 CC lib/util/fd_group.o 00:20:36.625 LIB libspdk_vfio_user.a 00:20:36.625 CC lib/util/xor.o 00:20:36.625 CC lib/util/zipf.o 00:20:37.190 LIB libspdk_util.a 00:20:37.448 CC lib/vmd/vmd.o 00:20:37.448 CC lib/vmd/led.o 00:20:37.448 CC lib/conf/conf.o 00:20:37.448 CC lib/json/json_util.o 00:20:37.448 CC lib/idxd/idxd.o 
00:20:37.448 CC lib/json/json_write.o 00:20:37.448 CC lib/rdma/common.o 00:20:37.448 CC lib/json/json_parse.o 00:20:37.448 CC lib/env_dpdk/env.o 00:20:37.448 LIB libspdk_trace_parser.a 00:20:37.448 CC lib/env_dpdk/memory.o 00:20:37.723 CC lib/env_dpdk/pci.o 00:20:37.723 LIB libspdk_conf.a 00:20:37.723 CC lib/rdma/rdma_verbs.o 00:20:37.723 CC lib/env_dpdk/init.o 00:20:37.723 CC lib/env_dpdk/threads.o 00:20:37.723 LIB libspdk_json.a 00:20:37.723 CC lib/env_dpdk/pci_ioat.o 00:20:37.981 CC lib/env_dpdk/pci_virtio.o 00:20:37.981 CC lib/jsonrpc/jsonrpc_server.o 00:20:37.981 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:20:37.981 LIB libspdk_rdma.a 00:20:37.981 CC lib/env_dpdk/pci_vmd.o 00:20:37.981 CC lib/env_dpdk/pci_idxd.o 00:20:38.239 CC lib/idxd/idxd_user.o 00:20:38.239 CC lib/env_dpdk/pci_event.o 00:20:38.239 CC lib/env_dpdk/sigbus_handler.o 00:20:38.239 CC lib/jsonrpc/jsonrpc_client.o 00:20:38.239 CC lib/env_dpdk/pci_dpdk.o 00:20:38.239 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:20:38.239 CC lib/idxd/idxd_kernel.o 00:20:38.239 LIB libspdk_vmd.a 00:20:38.239 CC lib/env_dpdk/pci_dpdk_2207.o 00:20:38.497 CC lib/env_dpdk/pci_dpdk_2211.o 00:20:38.497 LIB libspdk_idxd.a 00:20:38.497 LIB libspdk_jsonrpc.a 00:20:38.754 CC lib/rpc/rpc.o 00:20:39.011 LIB libspdk_rpc.a 00:20:39.011 CC lib/trace/trace.o 00:20:39.011 CC lib/trace/trace_flags.o 00:20:39.011 CC lib/trace/trace_rpc.o 00:20:39.011 CC lib/notify/notify.o 00:20:39.011 CC lib/notify/notify_rpc.o 00:20:39.011 CC lib/sock/sock_rpc.o 00:20:39.011 CC lib/sock/sock.o 00:20:39.268 LIB libspdk_notify.a 00:20:39.268 LIB libspdk_trace.a 00:20:39.268 LIB libspdk_env_dpdk.a 00:20:39.526 CC lib/thread/thread.o 00:20:39.526 CC lib/thread/iobuf.o 00:20:39.526 LIB libspdk_sock.a 00:20:39.783 CC lib/nvme/nvme_ctrlr_cmd.o 00:20:39.783 CC lib/nvme/nvme_ctrlr.o 00:20:39.783 CC lib/nvme/nvme_fabric.o 00:20:39.783 CC lib/nvme/nvme_ns_cmd.o 00:20:39.783 CC lib/nvme/nvme_ns.o 00:20:39.783 CC lib/nvme/nvme_pcie.o 00:20:39.783 CC lib/nvme/nvme_pcie_common.o 00:20:39.783 CC lib/nvme/nvme_qpair.o 00:20:40.041 CC lib/nvme/nvme.o 00:20:40.629 CC lib/nvme/nvme_quirks.o 00:20:40.887 CC lib/nvme/nvme_transport.o 00:20:40.887 CC lib/nvme/nvme_discovery.o 00:20:40.887 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:20:40.887 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:20:40.887 CC lib/nvme/nvme_tcp.o 00:20:41.145 CC lib/nvme/nvme_opal.o 00:20:41.145 CC lib/nvme/nvme_io_msg.o 00:20:41.403 CC lib/nvme/nvme_poll_group.o 00:20:41.660 CC lib/nvme/nvme_zns.o 00:20:41.660 CC lib/nvme/nvme_cuse.o 00:20:41.660 CC lib/nvme/nvme_vfio_user.o 00:20:41.660 LIB libspdk_thread.a 00:20:41.660 CC lib/nvme/nvme_rdma.o 00:20:41.918 CC lib/accel/accel.o 00:20:41.918 CC lib/blob/blobstore.o 00:20:41.918 CC lib/accel/accel_rpc.o 00:20:42.176 CC lib/blob/request.o 00:20:42.176 CC lib/blob/zeroes.o 00:20:42.434 CC lib/blob/blob_bs_dev.o 00:20:42.434 CC lib/init/json_config.o 00:20:42.434 CC lib/virtio/virtio.o 00:20:42.434 CC lib/virtio/virtio_vhost_user.o 00:20:42.692 CC lib/virtio/virtio_vfio_user.o 00:20:42.692 CC lib/virtio/virtio_pci.o 00:20:42.692 CC lib/init/subsystem.o 00:20:42.692 CC lib/init/subsystem_rpc.o 00:20:42.950 CC lib/init/rpc.o 00:20:42.950 CC lib/accel/accel_sw.o 00:20:43.266 LIB libspdk_init.a 00:20:43.266 LIB libspdk_virtio.a 00:20:43.266 CC lib/event/app.o 00:20:43.266 CC lib/event/reactor.o 00:20:43.266 CC lib/event/log_rpc.o 00:20:43.266 CC lib/event/scheduler_static.o 00:20:43.266 CC lib/event/app_rpc.o 00:20:43.266 LIB libspdk_accel.a 00:20:43.558 CC lib/bdev/bdev.o 00:20:43.559 CC lib/bdev/bdev_zone.o 
00:20:43.559 CC lib/bdev/bdev_rpc.o 00:20:43.559 CC lib/bdev/part.o 00:20:43.559 CC lib/bdev/scsi_nvme.o 00:20:43.559 LIB libspdk_nvme.a 00:20:43.817 LIB libspdk_event.a 00:20:46.348 LIB libspdk_blob.a 00:20:46.348 CC lib/lvol/lvol.o 00:20:46.348 CC lib/blobfs/blobfs.o 00:20:46.348 CC lib/blobfs/tree.o 00:20:46.916 LIB libspdk_bdev.a 00:20:47.174 CC lib/nbd/nbd.o 00:20:47.174 CC lib/nbd/nbd_rpc.o 00:20:47.174 CC lib/ublk/ublk.o 00:20:47.174 CC lib/ublk/ublk_rpc.o 00:20:47.174 CC lib/ftl/ftl_core.o 00:20:47.174 CC lib/nvmf/ctrlr.o 00:20:47.174 CC lib/nvmf/ctrlr_discovery.o 00:20:47.174 CC lib/scsi/dev.o 00:20:47.433 LIB libspdk_lvol.a 00:20:47.433 CC lib/nvmf/ctrlr_bdev.o 00:20:47.433 CC lib/ftl/ftl_init.o 00:20:47.433 LIB libspdk_blobfs.a 00:20:47.433 CC lib/ftl/ftl_layout.o 00:20:47.433 CC lib/nvmf/subsystem.o 00:20:47.433 CC lib/scsi/lun.o 00:20:47.691 CC lib/ftl/ftl_debug.o 00:20:47.691 CC lib/ftl/ftl_io.o 00:20:47.691 LIB libspdk_nbd.a 00:20:47.950 CC lib/nvmf/nvmf.o 00:20:47.950 CC lib/nvmf/nvmf_rpc.o 00:20:47.950 CC lib/scsi/port.o 00:20:47.950 CC lib/scsi/scsi.o 00:20:47.950 CC lib/nvmf/transport.o 00:20:47.950 CC lib/ftl/ftl_sb.o 00:20:47.950 LIB libspdk_ublk.a 00:20:48.208 CC lib/nvmf/tcp.o 00:20:48.208 CC lib/nvmf/rdma.o 00:20:48.208 CC lib/scsi/scsi_bdev.o 00:20:48.208 CC lib/ftl/ftl_l2p.o 00:20:48.467 CC lib/scsi/scsi_pr.o 00:20:48.467 CC lib/ftl/ftl_l2p_flat.o 00:20:48.725 CC lib/ftl/ftl_nv_cache.o 00:20:48.725 CC lib/scsi/scsi_rpc.o 00:20:48.725 CC lib/scsi/task.o 00:20:48.983 CC lib/ftl/ftl_band.o 00:20:48.983 CC lib/ftl/ftl_band_ops.o 00:20:48.983 CC lib/ftl/ftl_writer.o 00:20:48.983 CC lib/ftl/ftl_rq.o 00:20:48.983 CC lib/ftl/ftl_reloc.o 00:20:48.983 LIB libspdk_scsi.a 00:20:49.241 CC lib/ftl/ftl_l2p_cache.o 00:20:49.241 CC lib/ftl/ftl_p2l.o 00:20:49.241 CC lib/ftl/mngt/ftl_mngt.o 00:20:49.500 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:20:49.500 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:20:49.500 CC lib/ftl/mngt/ftl_mngt_startup.o 00:20:49.500 CC lib/ftl/mngt/ftl_mngt_md.o 00:20:49.500 CC lib/ftl/mngt/ftl_mngt_misc.o 00:20:49.500 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:20:49.758 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:20:49.758 CC lib/ftl/mngt/ftl_mngt_band.o 00:20:49.758 CC lib/iscsi/conn.o 00:20:49.758 CC lib/iscsi/init_grp.o 00:20:50.017 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:20:50.017 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:20:50.017 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:20:50.017 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:20:50.017 CC lib/ftl/utils/ftl_conf.o 00:20:50.017 CC lib/ftl/utils/ftl_md.o 00:20:50.017 CC lib/iscsi/iscsi.o 00:20:50.275 CC lib/ftl/utils/ftl_mempool.o 00:20:50.275 CC lib/ftl/utils/ftl_bitmap.o 00:20:50.275 CC lib/iscsi/md5.o 00:20:50.275 CC lib/iscsi/param.o 00:20:50.275 CC lib/ftl/utils/ftl_property.o 00:20:50.275 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:20:50.275 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:20:50.534 CC lib/iscsi/portal_grp.o 00:20:50.534 CC lib/iscsi/tgt_node.o 00:20:50.534 CC lib/iscsi/iscsi_subsystem.o 00:20:50.534 CC lib/vhost/vhost.o 00:20:50.534 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:20:50.792 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:20:50.792 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:20:50.792 CC lib/iscsi/iscsi_rpc.o 00:20:50.792 CC lib/iscsi/task.o 00:20:50.792 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:20:50.792 CC lib/ftl/upgrade/ftl_sb_v3.o 00:20:51.052 CC lib/vhost/vhost_rpc.o 00:20:51.052 LIB libspdk_nvmf.a 00:20:51.052 CC lib/vhost/vhost_scsi.o 00:20:51.052 CC lib/vhost/vhost_blk.o 00:20:51.052 CC lib/vhost/rte_vhost_user.o 00:20:51.052 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:20:51.052 CC lib/ftl/nvc/ftl_nvc_dev.o 00:20:51.052 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:20:51.310 CC lib/ftl/base/ftl_base_dev.o 00:20:51.311 CC lib/ftl/base/ftl_base_bdev.o 00:20:51.311 CC lib/ftl/ftl_trace.o 00:20:51.569 LIB libspdk_ftl.a 00:20:52.135 LIB libspdk_iscsi.a 00:20:52.394 LIB libspdk_vhost.a 00:20:52.652 CC module/env_dpdk/env_dpdk_rpc.o 00:20:52.652 CC module/scheduler/gscheduler/gscheduler.o 00:20:52.652 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:20:52.652 CC module/sock/posix/posix.o 00:20:52.652 CC module/blob/bdev/blob_bdev.o 00:20:52.652 CC module/accel/ioat/accel_ioat.o 00:20:52.652 CC module/scheduler/dynamic/scheduler_dynamic.o 00:20:52.652 CC module/accel/error/accel_error.o 00:20:52.652 CC module/accel/iaa/accel_iaa.o 00:20:52.652 CC module/accel/dsa/accel_dsa.o 00:20:52.911 LIB libspdk_scheduler_dpdk_governor.a 00:20:52.911 LIB libspdk_scheduler_gscheduler.a 00:20:52.911 LIB libspdk_env_dpdk_rpc.a 00:20:52.911 CC module/accel/dsa/accel_dsa_rpc.o 00:20:52.911 CC module/accel/ioat/accel_ioat_rpc.o 00:20:52.911 CC module/accel/error/accel_error_rpc.o 00:20:52.911 CC module/accel/iaa/accel_iaa_rpc.o 00:20:52.911 LIB libspdk_scheduler_dynamic.a 00:20:53.169 LIB libspdk_blob_bdev.a 00:20:53.169 LIB libspdk_accel_ioat.a 00:20:53.169 LIB libspdk_accel_error.a 00:20:53.169 LIB libspdk_accel_iaa.a 00:20:53.169 LIB libspdk_accel_dsa.a 00:20:53.169 CC module/blobfs/bdev/blobfs_bdev.o 00:20:53.169 CC module/bdev/passthru/vbdev_passthru.o 00:20:53.169 CC module/bdev/gpt/gpt.o 00:20:53.169 CC module/bdev/malloc/bdev_malloc.o 00:20:53.169 CC module/bdev/delay/vbdev_delay.o 00:20:53.169 CC module/bdev/error/vbdev_error.o 00:20:53.169 CC module/bdev/nvme/bdev_nvme.o 00:20:53.169 CC module/bdev/null/bdev_null.o 00:20:53.169 CC module/bdev/lvol/vbdev_lvol.o 00:20:53.429 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:20:53.429 CC module/bdev/gpt/vbdev_gpt.o 00:20:53.687 CC module/bdev/null/bdev_null_rpc.o 00:20:53.687 CC module/bdev/error/vbdev_error_rpc.o 00:20:53.687 LIB libspdk_blobfs_bdev.a 00:20:53.687 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:20:53.687 CC module/bdev/delay/vbdev_delay_rpc.o 00:20:53.687 LIB libspdk_sock_posix.a 00:20:53.687 CC module/bdev/nvme/bdev_nvme_rpc.o 00:20:53.687 CC module/bdev/malloc/bdev_malloc_rpc.o 00:20:53.945 LIB libspdk_bdev_gpt.a 00:20:53.945 LIB libspdk_bdev_null.a 00:20:53.945 CC module/bdev/raid/bdev_raid.o 00:20:53.945 LIB libspdk_bdev_error.a 00:20:53.945 CC module/bdev/raid/bdev_raid_rpc.o 00:20:53.945 CC module/bdev/raid/bdev_raid_sb.o 00:20:53.945 LIB libspdk_bdev_passthru.a 00:20:53.945 LIB libspdk_bdev_delay.a 00:20:53.945 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:20:53.945 CC module/bdev/raid/raid0.o 00:20:53.945 LIB libspdk_bdev_malloc.a 00:20:53.945 CC module/bdev/split/vbdev_split.o 00:20:53.945 CC module/bdev/zone_block/vbdev_zone_block.o 00:20:53.945 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:20:54.216 CC module/bdev/aio/bdev_aio.o 00:20:54.216 CC module/bdev/ftl/bdev_ftl.o 00:20:54.494 CC module/bdev/split/vbdev_split_rpc.o 00:20:54.494 LIB libspdk_bdev_lvol.a 00:20:54.494 CC module/bdev/iscsi/bdev_iscsi.o 00:20:54.494 CC module/bdev/virtio/bdev_virtio_scsi.o 00:20:54.494 CC module/bdev/ftl/bdev_ftl_rpc.o 00:20:54.494 LIB libspdk_bdev_zone_block.a 00:20:54.494 CC module/bdev/raid/raid1.o 00:20:54.494 LIB libspdk_bdev_split.a 00:20:54.494 CC module/bdev/raid/concat.o 00:20:54.494 CC module/bdev/aio/bdev_aio_rpc.o 00:20:54.752 CC module/bdev/raid/raid5f.o 00:20:54.752 LIB 
libspdk_bdev_ftl.a 00:20:54.752 CC module/bdev/virtio/bdev_virtio_blk.o 00:20:54.752 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:20:54.752 LIB libspdk_bdev_aio.a 00:20:54.752 CC module/bdev/nvme/nvme_rpc.o 00:20:54.752 CC module/bdev/nvme/bdev_mdns_client.o 00:20:54.752 CC module/bdev/virtio/bdev_virtio_rpc.o 00:20:54.752 CC module/bdev/nvme/vbdev_opal.o 00:20:55.010 LIB libspdk_bdev_iscsi.a 00:20:55.010 CC module/bdev/nvme/vbdev_opal_rpc.o 00:20:55.010 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:20:55.010 LIB libspdk_bdev_virtio.a 00:20:55.269 LIB libspdk_bdev_raid.a 00:20:56.644 LIB libspdk_bdev_nvme.a 00:20:56.902 CC module/event/subsystems/scheduler/scheduler.o 00:20:56.902 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:20:56.902 CC module/event/subsystems/sock/sock.o 00:20:56.902 CC module/event/subsystems/iobuf/iobuf.o 00:20:56.902 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:20:56.902 CC module/event/subsystems/vmd/vmd.o 00:20:56.902 CC module/event/subsystems/vmd/vmd_rpc.o 00:20:57.160 LIB libspdk_event_vhost_blk.a 00:20:57.160 LIB libspdk_event_sock.a 00:20:57.160 LIB libspdk_event_vmd.a 00:20:57.160 LIB libspdk_event_scheduler.a 00:20:57.160 LIB libspdk_event_iobuf.a 00:20:57.418 CC module/event/subsystems/accel/accel.o 00:20:57.418 LIB libspdk_event_accel.a 00:20:57.676 CC module/event/subsystems/bdev/bdev.o 00:20:57.934 LIB libspdk_event_bdev.a 00:20:58.192 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:20:58.192 CC module/event/subsystems/scsi/scsi.o 00:20:58.192 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:20:58.192 CC module/event/subsystems/ublk/ublk.o 00:20:58.192 CC module/event/subsystems/nbd/nbd.o 00:20:58.192 LIB libspdk_event_ublk.a 00:20:58.192 LIB libspdk_event_scsi.a 00:20:58.192 LIB libspdk_event_nbd.a 00:20:58.450 LIB libspdk_event_nvmf.a 00:20:58.450 CC module/event/subsystems/iscsi/iscsi.o 00:20:58.450 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:20:58.708 LIB libspdk_event_vhost_scsi.a 00:20:58.709 LIB libspdk_event_iscsi.a 00:20:58.967 CXX app/trace/trace.o 00:20:58.967 CC app/trace_record/trace_record.o 00:20:58.967 CC examples/ioat/perf/perf.o 00:20:58.967 CC examples/vmd/lsvmd/lsvmd.o 00:20:58.967 CC examples/sock/hello_world/hello_sock.o 00:20:58.967 CC examples/accel/perf/accel_perf.o 00:20:58.967 CC examples/nvme/hello_world/hello_world.o 00:20:58.967 CC test/accel/dif/dif.o 00:20:58.967 CC examples/bdev/hello_world/hello_bdev.o 00:20:58.967 CC examples/blob/hello_world/hello_blob.o 00:20:58.967 LINK lsvmd 00:20:59.225 LINK ioat_perf 00:20:59.225 LINK spdk_trace_record 00:20:59.225 LINK hello_world 00:20:59.225 LINK hello_bdev 00:20:59.226 LINK hello_blob 00:20:59.226 LINK hello_sock 00:20:59.494 LINK spdk_trace 00:20:59.494 LINK dif 00:20:59.494 LINK accel_perf 00:20:59.752 CC examples/ioat/verify/verify.o 00:20:59.752 CC app/nvmf_tgt/nvmf_main.o 00:20:59.752 CC examples/nvme/reconnect/reconnect.o 00:21:00.010 LINK nvmf_tgt 00:21:00.010 LINK verify 00:21:00.010 CC app/iscsi_tgt/iscsi_tgt.o 00:21:00.010 CC examples/vmd/led/led.o 00:21:00.269 CC app/spdk_tgt/spdk_tgt.o 00:21:00.269 LINK led 00:21:00.269 LINK reconnect 00:21:00.269 LINK iscsi_tgt 00:21:00.269 CC examples/nvmf/nvmf/nvmf.o 00:21:00.527 LINK spdk_tgt 00:21:00.785 CC examples/util/zipf/zipf.o 00:21:00.785 LINK nvmf 00:21:01.044 LINK zipf 00:21:01.302 CC examples/bdev/bdevperf/bdevperf.o 00:21:01.560 CC examples/nvme/nvme_manage/nvme_manage.o 00:21:01.560 CC examples/thread/thread/thread_ex.o 00:21:01.560 CC examples/blob/cli/blobcli.o 00:21:01.560 CC examples/idxd/perf/perf.o 
00:21:01.818 LINK thread 00:21:01.818 CC test/app/bdev_svc/bdev_svc.o 00:21:02.102 LINK idxd_perf 00:21:02.102 LINK bdev_svc 00:21:02.102 CC test/bdev/bdevio/bdevio.o 00:21:02.102 LINK nvme_manage 00:21:02.102 LINK blobcli 00:21:02.360 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:21:02.360 LINK bdevperf 00:21:02.618 LINK bdevio 00:21:02.876 CC test/app/histogram_perf/histogram_perf.o 00:21:02.876 CC app/spdk_lspci/spdk_lspci.o 00:21:02.876 LINK nvme_fuzz 00:21:02.876 LINK histogram_perf 00:21:03.140 LINK spdk_lspci 00:21:03.140 CC examples/nvme/arbitration/arbitration.o 00:21:03.140 CC examples/nvme/hotplug/hotplug.o 00:21:03.399 LINK hotplug 00:21:03.399 LINK arbitration 00:21:03.657 CC test/app/jsoncat/jsoncat.o 00:21:03.927 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:21:03.927 LINK jsoncat 00:21:04.188 CC test/app/stub/stub.o 00:21:04.188 CC app/spdk_nvme_perf/perf.o 00:21:04.188 CC app/spdk_nvme_identify/identify.o 00:21:04.188 CC app/spdk_nvme_discover/discovery_aer.o 00:21:04.188 CC app/spdk_top/spdk_top.o 00:21:04.447 LINK stub 00:21:04.447 LINK spdk_nvme_discover 00:21:04.447 CC app/vhost/vhost.o 00:21:04.708 CC examples/nvme/cmb_copy/cmb_copy.o 00:21:04.708 CC app/spdk_dd/spdk_dd.o 00:21:04.708 LINK vhost 00:21:04.981 LINK cmb_copy 00:21:04.981 CC test/blobfs/mkfs/mkfs.o 00:21:04.981 LINK mkfs 00:21:05.239 LINK spdk_dd 00:21:05.239 LINK spdk_nvme_identify 00:21:05.497 CC examples/nvme/abort/abort.o 00:21:05.497 CC app/fio/nvme/fio_plugin.o 00:21:05.497 LINK spdk_nvme_perf 00:21:05.497 LINK spdk_top 00:21:05.756 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:21:06.015 LINK pmr_persistence 00:21:06.015 LINK abort 00:21:06.285 TEST_HEADER include/spdk/accel.h 00:21:06.285 TEST_HEADER include/spdk/accel_module.h 00:21:06.285 TEST_HEADER include/spdk/assert.h 00:21:06.285 TEST_HEADER include/spdk/barrier.h 00:21:06.285 TEST_HEADER include/spdk/base64.h 00:21:06.285 TEST_HEADER include/spdk/bdev.h 00:21:06.285 TEST_HEADER include/spdk/bdev_module.h 00:21:06.285 TEST_HEADER include/spdk/bdev_zone.h 00:21:06.285 TEST_HEADER include/spdk/bit_array.h 00:21:06.285 TEST_HEADER include/spdk/bit_pool.h 00:21:06.285 TEST_HEADER include/spdk/blob.h 00:21:06.285 TEST_HEADER include/spdk/blob_bdev.h 00:21:06.285 TEST_HEADER include/spdk/blobfs.h 00:21:06.285 TEST_HEADER include/spdk/blobfs_bdev.h 00:21:06.285 TEST_HEADER include/spdk/conf.h 00:21:06.285 TEST_HEADER include/spdk/config.h 00:21:06.285 TEST_HEADER include/spdk/cpuset.h 00:21:06.285 LINK iscsi_fuzz 00:21:06.285 TEST_HEADER include/spdk/crc16.h 00:21:06.285 TEST_HEADER include/spdk/crc32.h 00:21:06.285 TEST_HEADER include/spdk/crc64.h 00:21:06.285 TEST_HEADER include/spdk/dif.h 00:21:06.285 TEST_HEADER include/spdk/dma.h 00:21:06.285 TEST_HEADER include/spdk/endian.h 00:21:06.285 TEST_HEADER include/spdk/env.h 00:21:06.285 TEST_HEADER include/spdk/env_dpdk.h 00:21:06.285 TEST_HEADER include/spdk/event.h 00:21:06.285 TEST_HEADER include/spdk/fd.h 00:21:06.285 TEST_HEADER include/spdk/fd_group.h 00:21:06.285 TEST_HEADER include/spdk/file.h 00:21:06.285 TEST_HEADER include/spdk/ftl.h 00:21:06.285 TEST_HEADER include/spdk/gpt_spec.h 00:21:06.285 TEST_HEADER include/spdk/hexlify.h 00:21:06.285 TEST_HEADER include/spdk/histogram_data.h 00:21:06.285 TEST_HEADER include/spdk/idxd.h 00:21:06.285 TEST_HEADER include/spdk/idxd_spec.h 00:21:06.285 TEST_HEADER include/spdk/init.h 00:21:06.285 TEST_HEADER include/spdk/ioat.h 00:21:06.285 TEST_HEADER include/spdk/ioat_spec.h 00:21:06.285 TEST_HEADER include/spdk/iscsi_spec.h 00:21:06.285 CC 
test/dma/test_dma/test_dma.o 00:21:06.285 TEST_HEADER include/spdk/json.h 00:21:06.285 TEST_HEADER include/spdk/jsonrpc.h 00:21:06.285 TEST_HEADER include/spdk/likely.h 00:21:06.285 TEST_HEADER include/spdk/log.h 00:21:06.285 TEST_HEADER include/spdk/lvol.h 00:21:06.285 TEST_HEADER include/spdk/memory.h 00:21:06.285 TEST_HEADER include/spdk/mmio.h 00:21:06.285 TEST_HEADER include/spdk/nbd.h 00:21:06.285 LINK spdk_nvme 00:21:06.285 TEST_HEADER include/spdk/notify.h 00:21:06.285 TEST_HEADER include/spdk/nvme.h 00:21:06.285 TEST_HEADER include/spdk/nvme_intel.h 00:21:06.285 TEST_HEADER include/spdk/nvme_ocssd.h 00:21:06.285 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:21:06.285 TEST_HEADER include/spdk/nvme_spec.h 00:21:06.285 TEST_HEADER include/spdk/nvme_zns.h 00:21:06.285 TEST_HEADER include/spdk/nvmf.h 00:21:06.285 TEST_HEADER include/spdk/nvmf_cmd.h 00:21:06.285 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:21:06.285 TEST_HEADER include/spdk/nvmf_spec.h 00:21:06.285 TEST_HEADER include/spdk/nvmf_transport.h 00:21:06.285 TEST_HEADER include/spdk/opal.h 00:21:06.285 TEST_HEADER include/spdk/opal_spec.h 00:21:06.285 TEST_HEADER include/spdk/pci_ids.h 00:21:06.285 TEST_HEADER include/spdk/pipe.h 00:21:06.285 TEST_HEADER include/spdk/queue.h 00:21:06.285 TEST_HEADER include/spdk/reduce.h 00:21:06.285 TEST_HEADER include/spdk/rpc.h 00:21:06.286 TEST_HEADER include/spdk/scheduler.h 00:21:06.286 TEST_HEADER include/spdk/scsi.h 00:21:06.286 TEST_HEADER include/spdk/scsi_spec.h 00:21:06.286 TEST_HEADER include/spdk/sock.h 00:21:06.286 TEST_HEADER include/spdk/stdinc.h 00:21:06.286 TEST_HEADER include/spdk/string.h 00:21:06.286 TEST_HEADER include/spdk/thread.h 00:21:06.286 TEST_HEADER include/spdk/trace.h 00:21:06.286 TEST_HEADER include/spdk/trace_parser.h 00:21:06.286 TEST_HEADER include/spdk/tree.h 00:21:06.286 TEST_HEADER include/spdk/ublk.h 00:21:06.286 TEST_HEADER include/spdk/util.h 00:21:06.286 TEST_HEADER include/spdk/uuid.h 00:21:06.286 TEST_HEADER include/spdk/version.h 00:21:06.543 TEST_HEADER include/spdk/vfio_user_pci.h 00:21:06.543 TEST_HEADER include/spdk/vfio_user_spec.h 00:21:06.543 TEST_HEADER include/spdk/vhost.h 00:21:06.543 TEST_HEADER include/spdk/vmd.h 00:21:06.543 TEST_HEADER include/spdk/xor.h 00:21:06.543 TEST_HEADER include/spdk/zipf.h 00:21:06.543 CXX test/cpp_headers/accel.o 00:21:06.543 CC test/env/mem_callbacks/mem_callbacks.o 00:21:06.801 CXX test/cpp_headers/accel_module.o 00:21:06.801 LINK test_dma 00:21:06.801 CC test/env/vtophys/vtophys.o 00:21:07.059 CXX test/cpp_headers/assert.o 00:21:07.059 LINK vtophys 00:21:07.059 CXX test/cpp_headers/barrier.o 00:21:07.317 CXX test/cpp_headers/base64.o 00:21:07.317 CC examples/interrupt_tgt/interrupt_tgt.o 00:21:07.317 LINK mem_callbacks 00:21:07.317 CC test/event/event_perf/event_perf.o 00:21:07.317 CC test/event/reactor/reactor.o 00:21:07.317 CC test/event/reactor_perf/reactor_perf.o 00:21:07.317 CXX test/cpp_headers/bdev.o 00:21:07.317 CC app/fio/bdev/fio_plugin.o 00:21:07.574 LINK interrupt_tgt 00:21:07.574 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:21:07.574 LINK event_perf 00:21:07.574 LINK reactor 00:21:07.574 LINK reactor_perf 00:21:07.574 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:21:07.574 CXX test/cpp_headers/bdev_module.o 00:21:07.574 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:21:07.832 CC test/env/memory/memory_ut.o 00:21:07.832 LINK env_dpdk_post_init 00:21:07.832 CXX test/cpp_headers/bdev_zone.o 00:21:08.090 LINK spdk_bdev 00:21:08.090 LINK vhost_fuzz 00:21:08.348 CXX 
test/cpp_headers/bit_array.o 00:21:08.348 CC test/event/app_repeat/app_repeat.o 00:21:08.348 CXX test/cpp_headers/bit_pool.o 00:21:08.348 CC test/event/scheduler/scheduler.o 00:21:08.607 LINK app_repeat 00:21:08.607 CXX test/cpp_headers/blob.o 00:21:08.607 CC test/lvol/esnap/esnap.o 00:21:08.607 CXX test/cpp_headers/blob_bdev.o 00:21:08.607 CXX test/cpp_headers/blobfs.o 00:21:08.880 LINK scheduler 00:21:08.880 CC test/nvme/aer/aer.o 00:21:08.880 CC test/nvme/reset/reset.o 00:21:08.880 LINK memory_ut 00:21:08.880 CXX test/cpp_headers/blobfs_bdev.o 00:21:08.880 CXX test/cpp_headers/conf.o 00:21:08.880 CC test/nvme/sgl/sgl.o 00:21:09.146 CXX test/cpp_headers/config.o 00:21:09.146 CC test/env/pci/pci_ut.o 00:21:09.146 CXX test/cpp_headers/cpuset.o 00:21:09.146 CXX test/cpp_headers/crc16.o 00:21:09.146 LINK reset 00:21:09.403 LINK aer 00:21:09.403 LINK sgl 00:21:09.403 CXX test/cpp_headers/crc32.o 00:21:09.403 CC test/nvme/e2edp/nvme_dp.o 00:21:09.661 CXX test/cpp_headers/crc64.o 00:21:09.661 CC test/nvme/overhead/overhead.o 00:21:09.661 LINK pci_ut 00:21:09.919 CXX test/cpp_headers/dif.o 00:21:10.177 CXX test/cpp_headers/dma.o 00:21:10.177 LINK nvme_dp 00:21:10.177 LINK overhead 00:21:10.177 CC test/nvme/err_injection/err_injection.o 00:21:10.177 CXX test/cpp_headers/endian.o 00:21:10.434 CXX test/cpp_headers/env.o 00:21:10.434 LINK err_injection 00:21:10.434 CC test/nvme/startup/startup.o 00:21:10.434 CC test/nvme/reserve/reserve.o 00:21:10.434 CC test/rpc_client/rpc_client_test.o 00:21:10.434 CC test/nvme/simple_copy/simple_copy.o 00:21:10.434 CXX test/cpp_headers/env_dpdk.o 00:21:10.692 LINK startup 00:21:10.692 LINK rpc_client_test 00:21:10.692 LINK reserve 00:21:10.692 CXX test/cpp_headers/event.o 00:21:10.950 CC test/thread/poller_perf/poller_perf.o 00:21:10.950 LINK simple_copy 00:21:10.950 CC test/nvme/connect_stress/connect_stress.o 00:21:10.950 CC test/nvme/boot_partition/boot_partition.o 00:21:10.950 CXX test/cpp_headers/fd.o 00:21:11.209 CXX test/cpp_headers/fd_group.o 00:21:11.209 LINK poller_perf 00:21:11.468 LINK boot_partition 00:21:11.468 LINK connect_stress 00:21:11.468 CC test/nvme/compliance/nvme_compliance.o 00:21:11.468 CC test/nvme/fused_ordering/fused_ordering.o 00:21:11.468 CXX test/cpp_headers/file.o 00:21:11.468 CXX test/cpp_headers/ftl.o 00:21:11.726 LINK fused_ordering 00:21:11.726 CC test/nvme/doorbell_aers/doorbell_aers.o 00:21:11.726 CC test/nvme/fdp/fdp.o 00:21:11.726 CXX test/cpp_headers/gpt_spec.o 00:21:11.984 CXX test/cpp_headers/hexlify.o 00:21:11.984 LINK nvme_compliance 00:21:11.984 LINK doorbell_aers 00:21:11.984 CXX test/cpp_headers/histogram_data.o 00:21:11.984 LINK fdp 00:21:12.241 CC test/thread/lock/spdk_lock.o 00:21:12.241 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:21:12.241 CXX test/cpp_headers/idxd.o 00:21:12.500 CC test/nvme/cuse/cuse.o 00:21:12.500 LINK histogram_ut 00:21:12.500 CXX test/cpp_headers/idxd_spec.o 00:21:12.500 CC test/unit/lib/accel/accel.c/accel_ut.o 00:21:12.500 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:21:12.757 CXX test/cpp_headers/init.o 00:21:12.757 CC test/unit/lib/bdev/part.c/part_ut.o 00:21:12.757 CXX test/cpp_headers/ioat.o 00:21:13.015 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:21:13.016 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:21:13.016 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:21:13.273 CXX test/cpp_headers/ioat_spec.o 00:21:13.273 LINK tree_ut 00:21:13.273 CXX test/cpp_headers/iscsi_spec.o 00:21:13.273 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 
00:21:13.531 CXX test/cpp_headers/json.o 00:21:13.531 LINK blob_bdev_ut 00:21:13.789 LINK cuse 00:21:13.789 CXX test/cpp_headers/jsonrpc.o 00:21:14.047 CXX test/cpp_headers/likely.o 00:21:14.047 CC test/unit/lib/blob/blob.c/blob_ut.o 00:21:14.047 CC test/unit/lib/dma/dma.c/dma_ut.o 00:21:14.047 CXX test/cpp_headers/log.o 00:21:14.305 CXX test/cpp_headers/lvol.o 00:21:14.305 LINK spdk_lock 00:21:14.563 LINK dma_ut 00:21:14.563 CXX test/cpp_headers/memory.o 00:21:14.563 CXX test/cpp_headers/mmio.o 00:21:14.821 LINK blobfs_async_ut 00:21:14.821 CC test/unit/lib/event/app.c/app_ut.o 00:21:14.821 LINK blobfs_sync_ut 00:21:14.821 CXX test/cpp_headers/nbd.o 00:21:14.821 CXX test/cpp_headers/notify.o 00:21:15.079 CXX test/cpp_headers/nvme.o 00:21:15.079 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:21:15.079 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:21:15.337 CXX test/cpp_headers/nvme_intel.o 00:21:15.338 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:21:15.338 LINK scsi_nvme_ut 00:21:15.622 CXX test/cpp_headers/nvme_ocssd.o 00:21:15.622 LINK blobfs_bdev_ut 00:21:15.622 LINK app_ut 00:21:15.622 LINK accel_ut 00:21:15.622 LINK esnap 00:21:15.622 LINK ioat_ut 00:21:15.908 CXX test/cpp_headers/nvme_ocssd_spec.o 00:21:15.908 CXX test/cpp_headers/nvme_spec.o 00:21:15.908 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:21:15.908 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:21:15.908 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:21:15.908 CXX test/cpp_headers/nvme_zns.o 00:21:16.166 CXX test/cpp_headers/nvmf.o 00:21:16.166 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:21:16.166 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:21:16.166 CXX test/cpp_headers/nvmf_cmd.o 00:21:16.166 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:21:16.425 LINK gpt_ut 00:21:16.425 CXX test/cpp_headers/nvmf_fc_spec.o 00:21:16.683 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:21:16.683 CXX test/cpp_headers/nvmf_spec.o 00:21:16.942 LINK reactor_ut 00:21:16.942 LINK json_util_ut 00:21:16.942 CXX test/cpp_headers/nvmf_transport.o 00:21:17.200 LINK part_ut 00:21:17.200 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:21:17.200 CXX test/cpp_headers/opal.o 00:21:17.200 LINK json_write_ut 00:21:17.200 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:21:17.200 LINK conn_ut 00:21:17.459 CC test/unit/lib/iscsi/param.c/param_ut.o 00:21:17.459 CXX test/cpp_headers/opal_spec.o 00:21:17.716 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:21:17.717 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:21:17.717 LINK init_grp_ut 00:21:17.717 CXX test/cpp_headers/pci_ids.o 00:21:17.975 LINK vbdev_lvol_ut 00:21:17.975 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:21:17.975 CXX test/cpp_headers/pipe.o 00:21:17.975 LINK param_ut 00:21:18.234 CXX test/cpp_headers/queue.o 00:21:18.234 CXX test/cpp_headers/reduce.o 00:21:18.234 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:21:18.234 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:21:18.492 LINK portal_grp_ut 00:21:18.492 CXX test/cpp_headers/rpc.o 00:21:18.750 CXX test/cpp_headers/scheduler.o 00:21:18.750 LINK bdev_zone_ut 00:21:18.750 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:21:19.007 CXX test/cpp_headers/scsi.o 00:21:19.007 LINK tgt_node_ut 00:21:19.007 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:21:19.007 CXX test/cpp_headers/scsi_spec.o 00:21:19.265 LINK jsonrpc_server_ut 00:21:19.265 LINK json_parse_ut 00:21:19.265 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:21:19.265 CXX 
test/cpp_headers/sock.o 00:21:19.523 LINK bdev_ut 00:21:19.523 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:21:19.523 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:21:19.523 CXX test/cpp_headers/stdinc.o 00:21:19.780 CXX test/cpp_headers/string.o 00:21:19.780 LINK bdev_raid_sb_ut 00:21:19.780 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:21:20.039 LINK vbdev_zone_block_ut 00:21:20.039 CXX test/cpp_headers/thread.o 00:21:20.039 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:21:20.039 LINK raid1_ut 00:21:20.039 CXX test/cpp_headers/trace.o 00:21:20.039 LINK iscsi_ut 00:21:20.039 LINK concat_ut 00:21:20.298 CC test/unit/lib/log/log.c/log_ut.o 00:21:20.298 CXX test/cpp_headers/trace_parser.o 00:21:20.298 CXX test/cpp_headers/tree.o 00:21:20.298 CXX test/cpp_headers/ublk.o 00:21:20.298 CXX test/cpp_headers/util.o 00:21:20.556 LINK log_ut 00:21:20.556 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:21:20.556 CXX test/cpp_headers/uuid.o 00:21:20.556 CC test/unit/lib/notify/notify.c/notify_ut.o 00:21:20.556 CXX test/cpp_headers/version.o 00:21:20.814 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:21:20.814 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:21:20.814 CXX test/cpp_headers/vfio_user_pci.o 00:21:20.814 LINK bdev_raid_ut 00:21:21.072 LINK notify_ut 00:21:21.072 CXX test/cpp_headers/vfio_user_spec.o 00:21:21.072 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:21:21.350 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:21:21.350 CXX test/cpp_headers/vhost.o 00:21:21.350 LINK raid5f_ut 00:21:21.634 CXX test/cpp_headers/vmd.o 00:21:21.634 CXX test/cpp_headers/xor.o 00:21:21.634 CXX test/cpp_headers/zipf.o 00:21:21.634 LINK dev_ut 00:21:21.893 CC test/unit/lib/sock/sock.c/sock_ut.o 00:21:21.893 CC test/unit/lib/thread/thread.c/thread_ut.o 00:21:21.893 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:21:22.459 LINK bdev_ut 00:21:22.459 LINK nvme_ut 00:21:22.717 LINK lvol_ut 00:21:22.717 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:21:22.717 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:21:22.976 LINK lun_ut 00:21:22.976 CC test/unit/lib/util/base64.c/base64_ut.o 00:21:23.234 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:21:23.234 LINK base64_ut 00:21:23.234 LINK scsi_ut 00:21:23.493 LINK blob_ut 00:21:23.493 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:21:23.493 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:21:23.493 LINK iobuf_ut 00:21:23.752 LINK sock_ut 00:21:23.752 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:21:24.010 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:21:24.010 LINK bit_array_ut 00:21:24.010 CC test/unit/lib/sock/posix.c/posix_ut.o 00:21:24.010 LINK cpuset_ut 00:21:24.010 LINK crc16_ut 00:21:24.269 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:21:24.269 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:21:24.269 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:21:24.527 LINK crc32_ieee_ut 00:21:24.527 LINK thread_ut 00:21:24.527 LINK pci_event_ut 00:21:24.527 LINK nvme_ctrlr_cmd_ut 00:21:24.786 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:21:24.786 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:21:24.786 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:21:24.786 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:21:25.044 LINK crc32c_ut 00:21:25.044 LINK subsystem_ut 00:21:25.044 LINK scsi_bdev_ut 00:21:25.044 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:21:25.303 LINK posix_ut 00:21:25.303 CC test/unit/lib/util/dif.c/dif_ut.o 00:21:25.303 LINK nvme_ctrlr_ut 00:21:25.303 LINK crc64_ut 00:21:25.303 CC 
test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:21:25.562 LINK idxd_user_ut 00:21:25.562 LINK rpc_ut 00:21:25.562 CC test/unit/lib/util/iov.c/iov_ut.o 00:21:25.562 LINK bdev_nvme_ut 00:21:25.562 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:21:25.820 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:21:25.820 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:21:25.820 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:21:25.820 LINK tcp_ut 00:21:25.820 LINK iov_ut 00:21:25.820 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:21:26.078 LINK scsi_pr_ut 00:21:26.078 LINK nvme_ctrlr_ocssd_cmd_ut 00:21:26.078 CC test/unit/lib/util/math.c/math_ut.o 00:21:26.337 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:21:26.337 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:21:26.337 LINK math_ut 00:21:26.337 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:21:26.595 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:21:26.853 LINK dif_ut 00:21:26.853 LINK idxd_ut 00:21:26.853 LINK nvme_ns_ut 00:21:27.112 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:21:27.112 CC test/unit/lib/util/string.c/string_ut.o 00:21:27.112 CC test/unit/lib/util/xor.c/xor_ut.o 00:21:27.370 LINK xor_ut 00:21:27.370 LINK string_ut 00:21:27.629 LINK ctrlr_bdev_ut 00:21:27.629 LINK pipe_ut 00:21:27.629 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:21:27.629 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:21:27.629 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:21:27.888 LINK nvmf_ut 00:21:27.888 LINK nvme_ns_ocssd_cmd_ut 00:21:27.888 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:21:28.145 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:21:28.145 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:21:28.145 LINK nvme_ns_cmd_ut 00:21:28.404 LINK ctrlr_discovery_ut 00:21:28.404 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:21:28.670 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:21:28.670 LINK subsystem_ut 00:21:28.670 LINK nvme_quirks_ut 00:21:28.929 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:21:28.929 LINK nvme_poll_group_ut 00:21:28.929 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:21:29.186 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:21:29.186 LINK nvme_qpair_ut 00:21:29.186 LINK ctrlr_ut 00:21:29.445 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:21:29.714 LINK nvme_transport_ut 00:21:29.714 CC test/unit/lib/rdma/common.c/common_ut.o 00:21:29.714 LINK nvme_pcie_ut 00:21:29.972 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:21:29.972 LINK nvme_io_msg_ut 00:21:30.230 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:21:30.230 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:21:30.230 LINK common_ut 00:21:30.230 LINK vhost_ut 00:21:30.488 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:21:30.746 LINK nvme_fabric_ut 00:21:30.746 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:21:30.746 LINK nvme_opal_ut 00:21:31.004 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:21:31.004 LINK ftl_l2p_ut 00:21:31.004 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:21:31.004 LINK nvme_pcie_common_ut 00:21:31.004 LINK nvme_tcp_ut 00:21:31.261 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:21:31.261 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:21:31.261 LINK ftl_bitmap_ut 00:21:31.261 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:21:31.519 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:21:31.519 LINK ftl_mempool_ut 00:21:31.519 LINK ftl_io_ut 00:21:32.087 LINK ftl_mngt_ut 00:21:32.087 LINK nvme_cuse_ut 
00:21:32.087 LINK ftl_band_ut 00:21:32.657 LINK nvme_rdma_ut 00:21:32.657 LINK transport_ut 00:21:32.915 LINK rdma_ut 00:21:32.915 LINK ftl_layout_upgrade_ut 00:21:32.915 LINK ftl_sb_ut 00:21:33.173 00:21:33.173 real 2m1.578s 00:21:33.173 user 10m7.904s 00:21:33.173 sys 2m32.910s 00:21:33.173 15:58:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:21:33.173 ************************************ 00:21:33.173 END TEST unittest_build 00:21:33.174 ************************************ 00:21:33.174 15:58:37 -- common/autotest_common.sh@10 -- $ set +x 00:21:33.432 15:58:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.432 15:58:37 -- nvmf/common.sh@7 -- # uname -s 00:21:33.432 15:58:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.432 15:58:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.432 15:58:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.432 15:58:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.432 15:58:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.432 15:58:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.432 15:58:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.432 15:58:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.432 15:58:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.432 15:58:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.432 15:58:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71b8af37-fdb9-4e3a-a376-0d434c729595 00:21:33.432 15:58:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=71b8af37-fdb9-4e3a-a376-0d434c729595 00:21:33.432 15:58:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.432 15:58:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.432 15:58:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:33.432 15:58:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.432 15:58:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.432 15:58:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.432 15:58:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.432 15:58:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:33.432 15:58:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:33.432 15:58:37 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:33.432 15:58:37 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:33.432 15:58:37 -- paths/export.sh@6 -- # export PATH 00:21:33.432 15:58:37 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:21:33.432 15:58:37 -- nvmf/common.sh@46 -- # : 0 00:21:33.432 15:58:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:33.432 15:58:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:33.432 15:58:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:33.432 15:58:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.432 15:58:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.432 15:58:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:33.432 15:58:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:33.432 15:58:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:33.432 15:58:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:21:33.432 15:58:37 -- spdk/autotest.sh@32 -- # uname -s 00:21:33.432 15:58:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:21:33.432 15:58:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:21:33.432 15:58:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:33.432 15:58:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:21:33.432 15:58:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:21:33.432 15:58:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:21:33.432 15:58:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:21:33.432 15:58:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:21:33.432 15:58:37 -- spdk/autotest.sh@48 -- # udevadm_pid=51499 00:21:33.432 15:58:37 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:21:33.432 15:58:37 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:21:33.432 15:58:37 -- spdk/autotest.sh@54 -- # echo 51509 00:21:33.432 15:58:37 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:21:33.432 15:58:37 -- spdk/autotest.sh@56 -- # echo 51510 00:21:33.432 15:58:37 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:21:33.432 15:58:37 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:21:33.432 15:58:37 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:21:33.432 15:58:37 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:21:33.432 15:58:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:33.432 15:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:33.432 15:58:37 -- spdk/autotest.sh@70 -- # create_test_list 00:21:33.433 15:58:37 -- common/autotest_common.sh@736 -- # xtrace_disable 00:21:33.433 15:58:37 -- common/autotest_common.sh@10 -- # set +x 
00:21:33.433 15:58:37 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:21:33.433 15:58:37 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:21:33.433 15:58:37 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:21:33.433 15:58:37 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:21:33.433 15:58:37 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:21:33.433 15:58:37 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:21:33.433 15:58:37 -- common/autotest_common.sh@1440 -- # uname 00:21:33.433 15:58:37 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:21:33.433 15:58:37 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:21:33.433 15:58:37 -- common/autotest_common.sh@1460 -- # uname 00:21:33.433 15:58:37 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:21:33.433 15:58:37 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:21:33.433 15:58:37 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:21:33.433 15:58:37 -- spdk/autotest.sh@83 -- # hash lcov 00:21:33.433 15:58:37 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:33.433 15:58:37 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:21:33.433 --rc lcov_branch_coverage=1 00:21:33.433 --rc lcov_function_coverage=1 00:21:33.433 --rc genhtml_branch_coverage=1 00:21:33.433 --rc genhtml_function_coverage=1 00:21:33.433 --rc genhtml_legend=1 00:21:33.433 --rc geninfo_all_blocks=1 00:21:33.433 ' 00:21:33.433 15:58:37 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:21:33.433 --rc lcov_branch_coverage=1 00:21:33.433 --rc lcov_function_coverage=1 00:21:33.433 --rc genhtml_branch_coverage=1 00:21:33.433 --rc genhtml_function_coverage=1 00:21:33.433 --rc genhtml_legend=1 00:21:33.433 --rc geninfo_all_blocks=1 00:21:33.433 ' 00:21:33.433 15:58:37 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:21:33.433 --rc lcov_branch_coverage=1 00:21:33.433 --rc lcov_function_coverage=1 00:21:33.433 --rc genhtml_branch_coverage=1 00:21:33.433 --rc genhtml_function_coverage=1 00:21:33.433 --rc genhtml_legend=1 00:21:33.433 --rc geninfo_all_blocks=1 00:21:33.433 --no-external' 00:21:33.433 15:58:37 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:21:33.433 --rc lcov_branch_coverage=1 00:21:33.433 --rc lcov_function_coverage=1 00:21:33.433 --rc genhtml_branch_coverage=1 00:21:33.433 --rc genhtml_function_coverage=1 00:21:33.433 --rc genhtml_legend=1 00:21:33.433 --rc geninfo_all_blocks=1 00:21:33.433 --no-external' 00:21:33.433 15:58:37 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:21:33.433 lcov: LCOV version 1.15 00:21:33.433 15:58:37 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:21:48.301 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:21:48.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:21:48.301 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:21:48.301 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:21:48.301 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:21:48.301 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:22:20.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:22:20.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:22:20.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:22:20.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:22:20.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:22:20.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:22:20.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:22:20.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:22:20.783 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:22:20.783 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:22:20.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:22:20.784 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:22:20.784 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:22:20.784 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:22:20.784 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:22:20.784 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:22:20.784 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:22:20.784 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:22:20.784 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:22:21.086 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:22:21.086 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:22:21.086 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:22:21.087 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:22:21.087 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:22:21.377 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:22:21.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:22:21.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:22:33.575 15:59:37 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:22:33.575 15:59:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:33.575 15:59:37 -- common/autotest_common.sh@10 -- # set +x 00:22:33.575 15:59:37 -- spdk/autotest.sh@102 -- # rm -f 00:22:33.575 15:59:37 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:33.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:33.575 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:22:33.575 15:59:37 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:22:33.575 15:59:37 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:22:33.575 15:59:37 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:22:33.575 15:59:37 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:22:33.576 15:59:37 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:33.576 15:59:37 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:22:33.576 15:59:37 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:22:33.576 15:59:37 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:33.576 15:59:37 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:33.576 15:59:37 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:22:33.576 15:59:37 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:22:33.576 15:59:37 -- spdk/autotest.sh@121 -- # grep -v p 00:22:33.576 15:59:37 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:22:33.576 15:59:37 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:22:33.576 15:59:37 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:22:33.576 15:59:37 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:22:33.576 15:59:37 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:22:33.576 No valid GPT data, bailing 00:22:33.576 15:59:37 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:33.576 15:59:37 -- scripts/common.sh@393 -- # pt= 00:22:33.576 15:59:37 -- scripts/common.sh@394 -- # return 1 00:22:33.576 15:59:37 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:22:33.576 1+0 records in 00:22:33.576 1+0 records out 00:22:33.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517495 s, 203 MB/s 00:22:33.576 15:59:37 -- spdk/autotest.sh@129 -- # sync 00:22:33.576 15:59:37 -- spdk/autotest.sh@131 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:22:33.576 15:59:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:22:33.576 15:59:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:22:34.951 15:59:39 -- spdk/autotest.sh@135 -- # uname -s 00:22:34.951 15:59:39 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:22:34.951 15:59:39 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:22:34.951 15:59:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:34.951 15:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:34.951 15:59:39 -- common/autotest_common.sh@10 -- # set +x 00:22:34.951 ************************************ 00:22:34.951 START TEST setup.sh 00:22:34.951 ************************************ 00:22:34.951 15:59:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:22:34.951 * Looking for test storage... 00:22:34.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:34.951 15:59:39 -- setup/test-setup.sh@10 -- # uname -s 00:22:34.951 15:59:39 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:22:34.951 15:59:39 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:22:34.951 15:59:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:34.951 15:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:34.951 15:59:39 -- common/autotest_common.sh@10 -- # set +x 00:22:34.951 ************************************ 00:22:34.951 START TEST acl 00:22:34.951 ************************************ 00:22:34.951 15:59:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:22:35.209 * Looking for test storage... 
00:22:35.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:35.209 15:59:39 -- setup/acl.sh@10 -- # get_zoned_devs 00:22:35.209 15:59:39 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:22:35.209 15:59:39 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:22:35.209 15:59:39 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:22:35.209 15:59:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:35.209 15:59:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:22:35.209 15:59:39 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:22:35.209 15:59:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:35.209 15:59:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:35.209 15:59:39 -- setup/acl.sh@12 -- # devs=() 00:22:35.209 15:59:39 -- setup/acl.sh@12 -- # declare -a devs 00:22:35.209 15:59:39 -- setup/acl.sh@13 -- # drivers=() 00:22:35.209 15:59:39 -- setup/acl.sh@13 -- # declare -A drivers 00:22:35.209 15:59:39 -- setup/acl.sh@51 -- # setup reset 00:22:35.209 15:59:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:35.209 15:59:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:35.467 15:59:39 -- setup/acl.sh@52 -- # collect_setup_devs 00:22:35.467 15:59:39 -- setup/acl.sh@16 -- # local dev driver 00:22:35.467 15:59:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:35.467 15:59:39 -- setup/acl.sh@15 -- # setup output status 00:22:35.467 15:59:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:35.467 15:59:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:22:35.725 Hugepages 00:22:35.725 node hugesize free / total 00:22:35.725 15:59:39 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:22:35.725 15:59:39 -- setup/acl.sh@19 -- # continue 00:22:35.725 15:59:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:35.725 00:22:35.725 Type BDF Vendor Device NUMA Driver Device Block devices 00:22:35.725 15:59:39 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:22:35.725 15:59:39 -- setup/acl.sh@19 -- # continue 00:22:35.725 15:59:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:35.725 15:59:39 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:22:35.725 15:59:39 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:22:35.725 15:59:39 -- setup/acl.sh@20 -- # continue 00:22:35.725 15:59:39 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:35.984 15:59:40 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:22:35.984 15:59:40 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:22:35.984 15:59:40 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:22:35.984 15:59:40 -- setup/acl.sh@22 -- # devs+=("$dev") 00:22:35.984 15:59:40 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:22:35.984 15:59:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:22:35.984 15:59:40 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:22:35.984 15:59:40 -- setup/acl.sh@54 -- # run_test denied denied 00:22:35.984 15:59:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:35.984 15:59:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:35.984 15:59:40 -- common/autotest_common.sh@10 -- # set +x 00:22:35.984 ************************************ 00:22:35.984 START TEST denied 00:22:35.984 ************************************ 00:22:35.984 15:59:40 -- common/autotest_common.sh@1104 -- # denied 00:22:35.984 15:59:40 -- 
setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:22:35.984 15:59:40 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:22:35.984 15:59:40 -- setup/acl.sh@38 -- # setup output config 00:22:35.984 15:59:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:35.984 15:59:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:37.924 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:22:37.924 15:59:41 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:22:37.924 15:59:41 -- setup/acl.sh@28 -- # local dev driver 00:22:37.924 15:59:41 -- setup/acl.sh@30 -- # for dev in "$@" 00:22:37.924 15:59:41 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:22:37.924 15:59:41 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:22:37.924 15:59:41 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:22:37.924 15:59:41 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:22:37.924 15:59:41 -- setup/acl.sh@41 -- # setup reset 00:22:37.924 15:59:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:37.924 15:59:41 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:38.183 00:22:38.183 real 0m2.181s 00:22:38.183 user 0m0.377s 00:22:38.183 sys 0m1.870s 00:22:38.183 15:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.183 ************************************ 00:22:38.183 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:38.183 END TEST denied 00:22:38.183 ************************************ 00:22:38.183 15:59:42 -- setup/acl.sh@55 -- # run_test allowed allowed 00:22:38.183 15:59:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:38.183 15:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:38.183 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:22:38.183 ************************************ 00:22:38.183 START TEST allowed 00:22:38.183 ************************************ 00:22:38.183 15:59:42 -- common/autotest_common.sh@1104 -- # allowed 00:22:38.183 15:59:42 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:22:38.183 15:59:42 -- setup/acl.sh@45 -- # setup output config 00:22:38.183 15:59:42 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:22:38.183 15:59:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:38.183 15:59:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:40.085 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:22:40.085 15:59:44 -- setup/acl.sh@47 -- # verify 00:22:40.085 15:59:44 -- setup/acl.sh@28 -- # local dev driver 00:22:40.085 15:59:44 -- setup/acl.sh@48 -- # setup reset 00:22:40.085 15:59:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:40.085 15:59:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:40.653 00:22:40.654 real 0m2.346s 00:22:40.654 user 0m0.334s 00:22:40.654 sys 0m2.052s 00:22:40.654 15:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.654 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:40.654 ************************************ 00:22:40.654 END TEST allowed 00:22:40.654 ************************************ 00:22:40.654 00:22:40.654 real 0m5.512s 00:22:40.654 user 0m1.084s 00:22:40.654 sys 0m4.593s 00:22:40.654 15:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.654 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:40.654 ************************************ 00:22:40.654 END TEST acl 
00:22:40.654 ************************************ 00:22:40.654 15:59:44 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:22:40.654 15:59:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:40.654 15:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:40.654 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:40.654 ************************************ 00:22:40.654 START TEST hugepages 00:22:40.654 ************************************ 00:22:40.654 15:59:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:22:40.654 * Looking for test storage... 00:22:40.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:40.654 15:59:44 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:22:40.654 15:59:44 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:22:40.654 15:59:44 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:22:40.654 15:59:44 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:22:40.654 15:59:44 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:22:40.654 15:59:44 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:22:40.654 15:59:44 -- setup/common.sh@17 -- # local get=Hugepagesize 00:22:40.654 15:59:44 -- setup/common.sh@18 -- # local node= 00:22:40.654 15:59:44 -- setup/common.sh@19 -- # local var val 00:22:40.654 15:59:44 -- setup/common.sh@20 -- # local mem_f mem 00:22:40.654 15:59:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:40.654 15:59:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:40.654 15:59:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:40.654 15:59:44 -- setup/common.sh@28 -- # mapfile -t mem 00:22:40.654 15:59:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 2984960 kB' 'MemAvailable: 7364884 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 397508 kB' 'Inactive: 4232628 kB' 'Active(anon): 109552 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232628 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 127108 kB' 'Mapped: 58228 kB' 'Shmem: 2600 kB' 'KReclaimable: 181092 kB' 'Slab: 260332 kB' 'SReclaimable: 181092 kB' 'SUnreclaim: 79240 kB' 'KernelStack: 4924 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026008 kB' 'Committed_AS: 366932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.654 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.654 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 
-- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # continue 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # IFS=': ' 00:22:40.655 15:59:44 -- setup/common.sh@31 -- # read -r var val _ 00:22:40.655 15:59:44 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:22:40.655 15:59:44 -- setup/common.sh@33 -- # echo 2048 00:22:40.655 15:59:44 -- setup/common.sh@33 -- # return 0 00:22:40.655 15:59:44 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:22:40.655 15:59:44 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:22:40.655 15:59:44 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:22:40.655 15:59:44 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:22:40.655 15:59:44 -- setup/hugepages.sh@22 -- # unset -v 
HUGEMEM 00:22:40.655 15:59:44 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:22:40.655 15:59:44 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:22:40.655 15:59:44 -- setup/hugepages.sh@207 -- # get_nodes 00:22:40.655 15:59:44 -- setup/hugepages.sh@27 -- # local node 00:22:40.655 15:59:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:40.655 15:59:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:22:40.655 15:59:44 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:40.655 15:59:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:40.655 15:59:44 -- setup/hugepages.sh@208 -- # clear_hp 00:22:40.655 15:59:44 -- setup/hugepages.sh@37 -- # local node hp 00:22:40.655 15:59:44 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:22:40.655 15:59:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:40.655 15:59:44 -- setup/hugepages.sh@41 -- # echo 0 00:22:40.655 15:59:44 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:40.655 15:59:44 -- setup/hugepages.sh@41 -- # echo 0 00:22:40.655 15:59:44 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:22:40.655 15:59:44 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:22:40.655 15:59:44 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:22:40.655 15:59:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:40.655 15:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:40.655 15:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:40.655 ************************************ 00:22:40.655 START TEST default_setup 00:22:40.655 ************************************ 00:22:40.655 15:59:44 -- common/autotest_common.sh@1104 -- # default_setup 00:22:40.655 15:59:44 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:22:40.655 15:59:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:22:40.655 15:59:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:22:40.655 15:59:44 -- setup/hugepages.sh@51 -- # shift 00:22:40.655 15:59:44 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:22:40.655 15:59:44 -- setup/hugepages.sh@52 -- # local node_ids 00:22:40.655 15:59:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:40.655 15:59:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:22:40.655 15:59:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:22:40.655 15:59:44 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:22:40.655 15:59:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:40.655 15:59:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:22:40.655 15:59:44 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:40.655 15:59:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:40.655 15:59:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:40.655 15:59:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:22:40.655 15:59:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:22:40.655 15:59:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:22:40.655 15:59:44 -- setup/hugepages.sh@73 -- # return 0 00:22:40.655 15:59:44 -- setup/hugepages.sh@137 -- # setup output 00:22:40.655 15:59:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:40.655 15:59:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:41.223 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 
00:22:41.223 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:22:42.186 15:59:46 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:22:42.186 15:59:46 -- setup/hugepages.sh@89 -- # local node 00:22:42.186 15:59:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:42.186 15:59:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:42.186 15:59:46 -- setup/hugepages.sh@92 -- # local surp 00:22:42.186 15:59:46 -- setup/hugepages.sh@93 -- # local resv 00:22:42.186 15:59:46 -- setup/hugepages.sh@94 -- # local anon 00:22:42.186 15:59:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:42.186 15:59:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:42.186 15:59:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:42.186 15:59:46 -- setup/common.sh@18 -- # local node= 00:22:42.186 15:59:46 -- setup/common.sh@19 -- # local var val 00:22:42.186 15:59:46 -- setup/common.sh@20 -- # local mem_f mem 00:22:42.186 15:59:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:42.186 15:59:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:42.186 15:59:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:42.186 15:59:46 -- setup/common.sh@28 -- # mapfile -t mem 00:22:42.186 15:59:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 4961912 kB' 'MemAvailable: 9341780 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 413020 kB' 'Inactive: 4232636 kB' 'Active(anon): 125064 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142644 kB' 'Mapped: 58284 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260412 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79384 kB' 'KernelStack: 4992 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.186 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.186 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- 
setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.187 15:59:46 -- setup/common.sh@33 -- # echo 0 00:22:42.187 15:59:46 -- setup/common.sh@33 -- # return 0 00:22:42.187 15:59:46 -- setup/hugepages.sh@97 -- # anon=0 00:22:42.187 15:59:46 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:42.187 15:59:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:42.187 15:59:46 -- setup/common.sh@18 -- # local node= 00:22:42.187 15:59:46 -- setup/common.sh@19 -- # local var val 00:22:42.187 15:59:46 -- setup/common.sh@20 -- # local mem_f mem 00:22:42.187 15:59:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:42.187 15:59:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:42.187 15:59:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:42.187 15:59:46 -- setup/common.sh@28 -- # mapfile -t mem 00:22:42.187 15:59:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 4961912 kB' 'MemAvailable: 9341780 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 412824 kB' 'Inactive: 4232636 kB' 'Active(anon): 124868 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142728 kB' 'Mapped: 58284 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260412 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79384 kB' 'KernelStack: 4992 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.187 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.187 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # 
continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.188 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.188 15:59:46 -- setup/common.sh@33 -- # echo 0 00:22:42.188 15:59:46 -- setup/common.sh@33 -- # return 0 00:22:42.188 15:59:46 -- setup/hugepages.sh@99 -- # surp=0 00:22:42.188 15:59:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:42.188 15:59:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:42.188 15:59:46 -- setup/common.sh@18 -- # local node= 00:22:42.188 15:59:46 -- setup/common.sh@19 -- # local var val 00:22:42.188 15:59:46 -- setup/common.sh@20 -- # local mem_f mem 00:22:42.188 15:59:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:42.188 15:59:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:42.188 15:59:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:42.188 15:59:46 -- setup/common.sh@28 -- # mapfile -t mem 00:22:42.188 15:59:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.188 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 4961912 kB' 'MemAvailable: 9341780 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 412832 kB' 'Inactive: 4232636 kB' 'Active(anon): 124876 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142456 kB' 'Mapped: 58276 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260408 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79380 kB' 'KernelStack: 4960 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 
15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # 
continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.189 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.189 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:42.190 15:59:46 -- setup/common.sh@33 -- # echo 0 00:22:42.190 15:59:46 -- 
setup/common.sh@33 -- # return 0 00:22:42.190 15:59:46 -- setup/hugepages.sh@100 -- # resv=0 00:22:42.190 nr_hugepages=1024 00:22:42.190 15:59:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:22:42.190 resv_hugepages=0 00:22:42.190 15:59:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:42.190 surplus_hugepages=0 00:22:42.190 15:59:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:42.190 anon_hugepages=0 00:22:42.190 15:59:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:42.190 15:59:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:42.190 15:59:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:22:42.190 15:59:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:42.190 15:59:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:42.190 15:59:46 -- setup/common.sh@18 -- # local node= 00:22:42.190 15:59:46 -- setup/common.sh@19 -- # local var val 00:22:42.190 15:59:46 -- setup/common.sh@20 -- # local mem_f mem 00:22:42.190 15:59:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:42.190 15:59:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:42.190 15:59:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:42.190 15:59:46 -- setup/common.sh@28 -- # mapfile -t mem 00:22:42.190 15:59:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 4972892 kB' 'MemAvailable: 9352760 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 412848 kB' 'Inactive: 4232636 kB' 'Active(anon): 124892 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142516 kB' 'Mapped: 58276 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260408 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79380 kB' 'KernelStack: 4976 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 
15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.190 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.190 15:59:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 
15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.191 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.191 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:42.191 15:59:46 -- setup/common.sh@33 -- # echo 1024 00:22:42.191 15:59:46 -- setup/common.sh@33 -- # return 0 00:22:42.191 15:59:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:42.191 15:59:46 -- setup/hugepages.sh@112 -- # get_nodes 00:22:42.191 15:59:46 -- setup/hugepages.sh@27 -- # local node 00:22:42.191 15:59:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:42.191 15:59:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:22:42.191 15:59:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:42.191 15:59:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:42.191 15:59:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:42.191 15:59:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:42.191 15:59:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:42.191 15:59:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:42.192 15:59:46 -- setup/common.sh@18 -- # local node=0 00:22:42.192 15:59:46 -- setup/common.sh@19 -- # local var val 00:22:42.192 15:59:46 -- setup/common.sh@20 -- # local mem_f mem 00:22:42.192 15:59:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:42.192 15:59:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:42.192 15:59:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:42.192 15:59:46 -- setup/common.sh@28 -- # mapfile -t mem 00:22:42.192 15:59:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 
4974172 kB' 'MemUsed: 7272148 kB' 'SwapCached: 0 kB' 'Active: 412816 kB' 'Inactive: 4232636 kB' 'Active(anon): 124860 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4531932 kB' 'Mapped: 58276 kB' 'AnonPages: 142460 kB' 'Shmem: 2592 kB' 'KernelStack: 4960 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181028 kB' 'Slab: 260408 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 
00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.192 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.192 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.193 15:59:46 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.193 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.193 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.193 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.193 15:59:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.193 15:59:46 -- setup/common.sh@33 -- # echo 0 00:22:42.193 15:59:46 -- setup/common.sh@33 -- # return 0 00:22:42.193 15:59:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:42.193 15:59:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:42.193 15:59:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:42.193 15:59:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:42.193 node0=1024 expecting 1024 00:22:42.193 15:59:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:42.193 15:59:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:42.193 00:22:42.193 real 0m1.413s 00:22:42.193 user 0m0.308s 00:22:42.193 sys 0m1.103s 00:22:42.193 15:59:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.193 15:59:46 -- common/autotest_common.sh@10 -- # set +x 00:22:42.193 ************************************ 00:22:42.193 END TEST default_setup 00:22:42.193 ************************************ 00:22:42.193 15:59:46 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:22:42.193 15:59:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:42.193 15:59:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:42.193 15:59:46 -- common/autotest_common.sh@10 -- # set +x 00:22:42.193 ************************************ 00:22:42.193 START TEST per_node_1G_alloc 00:22:42.193 ************************************ 00:22:42.193 15:59:46 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:22:42.193 15:59:46 -- setup/hugepages.sh@143 -- # local IFS=, 00:22:42.193 15:59:46 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:22:42.193 15:59:46 -- setup/hugepages.sh@49 -- # local size=1048576 00:22:42.193 15:59:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:22:42.193 15:59:46 -- setup/hugepages.sh@51 -- # shift 00:22:42.193 15:59:46 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:22:42.193 15:59:46 -- setup/hugepages.sh@52 -- # local node_ids 00:22:42.193 15:59:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:42.193 15:59:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:22:42.193 15:59:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:22:42.193 15:59:46 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:22:42.193 15:59:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:42.193 15:59:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:22:42.193 15:59:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:42.193 15:59:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:42.193 15:59:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:42.193 15:59:46 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:22:42.193 15:59:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:22:42.193 15:59:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:22:42.193 15:59:46 -- setup/hugepages.sh@73 -- # return 0 00:22:42.193 15:59:46 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:22:42.193 15:59:46 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:22:42.193 15:59:46 -- setup/hugepages.sh@146 -- # setup output 00:22:42.193 
15:59:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:42.193 15:59:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:42.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:42.476 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:42.735 15:59:46 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:22:42.735 15:59:46 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:22:42.735 15:59:46 -- setup/hugepages.sh@89 -- # local node 00:22:42.735 15:59:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:42.735 15:59:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:42.735 15:59:46 -- setup/hugepages.sh@92 -- # local surp 00:22:42.735 15:59:46 -- setup/hugepages.sh@93 -- # local resv 00:22:42.735 15:59:46 -- setup/hugepages.sh@94 -- # local anon 00:22:42.735 15:59:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:42.735 15:59:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:42.735 15:59:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:42.735 15:59:46 -- setup/common.sh@18 -- # local node= 00:22:42.735 15:59:46 -- setup/common.sh@19 -- # local var val 00:22:42.735 15:59:46 -- setup/common.sh@20 -- # local mem_f mem 00:22:42.735 15:59:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:42.735 15:59:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:42.735 15:59:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:42.735 15:59:46 -- setup/common.sh@28 -- # mapfile -t mem 00:22:42.735 15:59:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:42.735 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.735 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.735 15:59:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6028136 kB' 'MemAvailable: 10408004 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 413244 kB' 'Inactive: 4232636 kB' 'Active(anon): 125288 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142956 kB' 'Mapped: 58292 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260444 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79416 kB' 'KernelStack: 5024 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 383008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:42.735 15:59:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 
15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ KernelStack 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.736 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.736 15:59:46 -- setup/common.sh@31 
-- # read -r var val _ 00:22:42.736 15:59:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:42.736 15:59:46 -- setup/common.sh@33 -- # echo 0 00:22:42.736 15:59:46 -- setup/common.sh@33 -- # return 0 00:22:42.736 15:59:46 -- setup/hugepages.sh@97 -- # anon=0 00:22:42.737 15:59:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:42.737 15:59:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:42.737 15:59:46 -- setup/common.sh@18 -- # local node= 00:22:42.737 15:59:46 -- setup/common.sh@19 -- # local var val 00:22:42.737 15:59:46 -- setup/common.sh@20 -- # local mem_f mem 00:22:42.737 15:59:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:42.737 15:59:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:42.737 15:59:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:42.737 15:59:46 -- setup/common.sh@28 -- # mapfile -t mem 00:22:42.737 15:59:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6028388 kB' 'MemAvailable: 10408256 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 412876 kB' 'Inactive: 4232636 kB' 'Active(anon): 124920 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142588 kB' 'Mapped: 58292 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260436 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79408 kB' 'KernelStack: 5008 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:46 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:46 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:42.737 15:59:47 -- setup/common.sh@32 -- # continue 00:22:42.737 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 
00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- 
setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 
00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.000 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.000 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.000 15:59:47 -- setup/common.sh@33 -- # echo 0 00:22:43.000 15:59:47 -- setup/common.sh@33 -- # return 0 00:22:43.000 15:59:47 -- setup/hugepages.sh@99 -- # surp=0 00:22:43.000 15:59:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:43.000 15:59:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:43.000 15:59:47 -- setup/common.sh@18 -- # local node= 00:22:43.000 15:59:47 -- setup/common.sh@19 -- # local var val 00:22:43.000 15:59:47 -- setup/common.sh@20 -- # local mem_f mem 00:22:43.000 15:59:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:43.000 15:59:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:43.001 15:59:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:43.001 15:59:47 -- setup/common.sh@28 -- # mapfile -t mem 00:22:43.001 15:59:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6028640 kB' 'MemAvailable: 10408508 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 412752 kB' 'Inactive: 4232636 kB' 'Active(anon): 124796 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142452 kB' 'Mapped: 58276 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260412 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79384 kB' 'KernelStack: 4976 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- 
setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.001 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.001 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.001 
15:59:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 
00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:43.002 15:59:47 -- setup/common.sh@33 -- # echo 0 00:22:43.002 15:59:47 -- setup/common.sh@33 -- # return 0 00:22:43.002 15:59:47 -- setup/hugepages.sh@100 -- # resv=0 00:22:43.002 nr_hugepages=512 00:22:43.002 15:59:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:22:43.002 resv_hugepages=0 00:22:43.002 15:59:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:43.002 surplus_hugepages=0 00:22:43.002 15:59:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:43.002 anon_hugepages=0 00:22:43.002 15:59:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:43.002 15:59:47 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:43.002 15:59:47 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:22:43.002 15:59:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:43.002 15:59:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:43.002 15:59:47 -- setup/common.sh@18 -- # local node= 00:22:43.002 15:59:47 -- setup/common.sh@19 -- # local var val 00:22:43.002 15:59:47 -- setup/common.sh@20 -- # local mem_f mem 00:22:43.002 15:59:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:43.002 15:59:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:43.002 15:59:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:43.002 15:59:47 -- setup/common.sh@28 -- # mapfile -t mem 00:22:43.002 15:59:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6028640 kB' 'MemAvailable: 10408508 kB' 'Buffers: 36216 kB' 'Cached: 4495716 kB' 'SwapCached: 0 kB' 'Active: 413008 kB' 'Inactive: 4232636 kB' 'Active(anon): 125052 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142448 kB' 'Mapped: 58276 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260412 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79384 kB' 'KernelStack: 4976 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 
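
At this point the test has resv=0, surplus=0 and anon=0, and hugepages.sh@107 checks that the configured count matches what the kernel reports: 512 == nr_hugepages + surp + resv, i.e. 512 == 512 + 0 + 0, before re-reading HugePages_Total (the scan that continues below) to confirm. A self-contained sketch of that consistency check follows; meminfo_val and the echo messages are illustrative names, not the script's own.

# Sketch of the hugepage accounting check performed above: the configured page
# count must equal HugePages_Total once surplus and reserved pages are included.
meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=512                                  # value configured by the test
resv=$(meminfo_val HugePages_Rsvd)                # 0 in this run
surp=$(meminfo_val HugePages_Surp)                # 0 in this run
total=$(meminfo_val HugePages_Total)              # 512 in this run

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
fi
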
00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.002 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.002 15:59:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:43.003 15:59:47 -- setup/common.sh@33 -- # echo 512 00:22:43.003 15:59:47 -- setup/common.sh@33 -- # return 0 00:22:43.003 15:59:47 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:43.003 15:59:47 -- setup/hugepages.sh@112 -- # get_nodes 00:22:43.003 15:59:47 -- setup/hugepages.sh@27 -- # local node 00:22:43.003 15:59:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:43.003 15:59:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:22:43.003 15:59:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:43.003 15:59:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:43.003 15:59:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:43.003 15:59:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:43.003 15:59:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:43.003 15:59:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:43.003 15:59:47 -- setup/common.sh@18 -- # local node=0 00:22:43.003 15:59:47 -- setup/common.sh@19 -- # local var val 00:22:43.003 15:59:47 -- setup/common.sh@20 -- # local mem_f mem 00:22:43.003 15:59:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:43.003 15:59:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:43.003 15:59:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:43.003 15:59:47 -- setup/common.sh@28 -- # mapfile -t mem 00:22:43.003 15:59:47 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6028892 kB' 'MemUsed: 6217428 kB' 'SwapCached: 0 kB' 'Active: 412596 kB' 'Inactive: 4232636 kB' 'Active(anon): 124640 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232636 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4531932 kB' 'Mapped: 58276 kB' 'AnonPages: 142284 kB' 'Shmem: 2592 kB' 'KernelStack: 4976 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181028 kB' 'Slab: 260412 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.003 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.003 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 
-- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 
15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.004 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.004 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.005 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.005 15:59:47 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.005 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.005 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.005 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.005 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.005 15:59:47 -- setup/common.sh@32 -- # continue 00:22:43.005 15:59:47 -- setup/common.sh@31 -- # IFS=': ' 00:22:43.005 15:59:47 -- setup/common.sh@31 -- # read -r var val _ 00:22:43.005 15:59:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:43.005 15:59:47 -- setup/common.sh@33 -- # echo 0 00:22:43.005 15:59:47 -- setup/common.sh@33 -- # return 0 00:22:43.005 15:59:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:43.005 15:59:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:43.005 15:59:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:43.005 15:59:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:43.005 node0=512 expecting 512 00:22:43.005 15:59:47 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:22:43.005 15:59:47 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:22:43.005 00:22:43.005 real 0m0.750s 00:22:43.005 user 0m0.221s 00:22:43.005 sys 0m0.571s 00:22:43.005 15:59:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.005 15:59:47 -- common/autotest_common.sh@10 -- # set +x 00:22:43.005 ************************************ 00:22:43.005 END TEST per_node_1G_alloc 00:22:43.005 ************************************ 00:22:43.005 15:59:47 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:22:43.005 15:59:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:43.005 15:59:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:43.005 15:59:47 -- common/autotest_common.sh@10 -- # set +x 00:22:43.005 ************************************ 00:22:43.005 START TEST even_2G_alloc 00:22:43.005 ************************************ 00:22:43.005 15:59:47 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:22:43.005 15:59:47 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:22:43.005 15:59:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:22:43.005 15:59:47 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:22:43.005 15:59:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:43.005 15:59:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:22:43.005 15:59:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:22:43.005 15:59:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:43.005 15:59:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:43.005 15:59:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:22:43.005 15:59:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:43.005 15:59:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:43.005 15:59:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:43.005 15:59:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:43.005 15:59:47 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:22:43.005 15:59:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:43.005 15:59:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:22:43.005 15:59:47 -- setup/hugepages.sh@83 -- # : 0 00:22:43.005 15:59:47 -- setup/hugepages.sh@84 -- # : 0 00:22:43.005 15:59:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:43.005 15:59:47 -- setup/hugepages.sh@153 -- # 
NRHUGE=1024 00:22:43.005 15:59:47 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:22:43.005 15:59:47 -- setup/hugepages.sh@153 -- # setup output 00:22:43.005 15:59:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:43.005 15:59:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:43.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:43.264 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:44.204 15:59:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:22:44.204 15:59:48 -- setup/hugepages.sh@89 -- # local node 00:22:44.204 15:59:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:44.204 15:59:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:44.204 15:59:48 -- setup/hugepages.sh@92 -- # local surp 00:22:44.204 15:59:48 -- setup/hugepages.sh@93 -- # local resv 00:22:44.204 15:59:48 -- setup/hugepages.sh@94 -- # local anon 00:22:44.204 15:59:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:44.204 15:59:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:44.204 15:59:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:44.204 15:59:48 -- setup/common.sh@18 -- # local node= 00:22:44.204 15:59:48 -- setup/common.sh@19 -- # local var val 00:22:44.204 15:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:22:44.204 15:59:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:44.204 15:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:44.204 15:59:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:44.204 15:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:22:44.204 15:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 4997716 kB' 'MemAvailable: 9377588 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 413000 kB' 'Inactive: 4232640 kB' 'Active(anon): 125044 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142700 kB' 'Mapped: 58292 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260408 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79380 kB' 'KernelStack: 4992 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var 
val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.204 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.204 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 
15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- 
setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:44.205 15:59:48 -- setup/common.sh@33 -- # echo 0 00:22:44.205 15:59:48 -- setup/common.sh@33 -- # return 0 00:22:44.205 15:59:48 -- setup/hugepages.sh@97 -- # anon=0 00:22:44.205 15:59:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:44.205 15:59:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:44.205 15:59:48 -- setup/common.sh@18 -- # local node= 00:22:44.205 15:59:48 -- setup/common.sh@19 -- # local var val 00:22:44.205 15:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:22:44.205 15:59:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:44.205 15:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:44.205 15:59:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:44.205 15:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:22:44.205 15:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 4997716 kB' 'MemAvailable: 9377588 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 412624 kB' 'Inactive: 4232640 kB' 'Active(anon): 124668 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142324 kB' 'Mapped: 58276 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'KernelStack: 4976 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 
15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.205 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.205 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read 
-r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # 
continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.206 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.206 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.207 15:59:48 -- setup/common.sh@33 -- # echo 0 00:22:44.207 15:59:48 -- setup/common.sh@33 -- # return 0 00:22:44.207 15:59:48 -- setup/hugepages.sh@99 -- # surp=0 00:22:44.207 15:59:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:44.207 15:59:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:44.207 15:59:48 -- setup/common.sh@18 -- # local node= 00:22:44.207 15:59:48 -- setup/common.sh@19 -- # local var val 00:22:44.207 15:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:22:44.207 15:59:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:44.207 15:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:44.207 15:59:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:44.207 15:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:22:44.207 15:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 4997968 kB' 'MemAvailable: 9377840 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 412864 kB' 'Inactive: 4232640 kB' 'Active(anon): 124908 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 
'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142592 kB' 'Mapped: 58276 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'KernelStack: 4976 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.207 15:59:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.207 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 
00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:44.208 15:59:48 -- setup/common.sh@33 -- # echo 0 00:22:44.208 15:59:48 -- setup/common.sh@33 -- # return 0 00:22:44.208 15:59:48 -- setup/hugepages.sh@100 -- # resv=0 00:22:44.208 nr_hugepages=1024 00:22:44.208 15:59:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:22:44.208 resv_hugepages=0 00:22:44.208 15:59:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:44.208 surplus_hugepages=0 00:22:44.208 15:59:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:44.208 anon_hugepages=0 00:22:44.208 15:59:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:44.208 15:59:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:44.208 15:59:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:22:44.208 15:59:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:44.208 15:59:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:44.208 15:59:48 -- setup/common.sh@18 -- # local node= 00:22:44.208 15:59:48 -- setup/common.sh@19 -- # local var val 00:22:44.208 15:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:22:44.208 15:59:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:44.208 15:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:44.208 15:59:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:44.208 15:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:22:44.208 15:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5007148 kB' 'MemAvailable: 9387020 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 412924 kB' 'Inactive: 4232640 kB' 'Active(anon): 124968 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142644 kB' 'Mapped: 58276 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'KernelStack: 4976 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 383388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 
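The trace above is setup/common.sh's get_meminfo walking the /proc/meminfo dump key by key: HugePages_Surp and HugePages_Rsvd both resolve to 0, and setup/hugepages.sh then verifies that the configured page count is consistent with the kernel's accounting ((( 1024 == nr_hugepages + surp + resv ))). A minimal stand-alone sketch of the same lookup and check, written for illustration and not taken from the SPDK scripts, could look like this:

# Illustrative helper; the traced implementation lives in setup/common.sh (get_meminfo).
meminfo_value() {
    local key=$1 node=${2:-}
    local src=/proc/meminfo
    # Per-node queries read the node-local file and drop its "Node N " prefix.
    [[ -n $node ]] && src=/sys/devices/system/node/node$node/meminfo
    sed 's/^Node [0-9]* //' "$src" | awk -v k="$key:" '$1 == k { print $2; exit }'
}

surp=$(meminfo_value HugePages_Surp)     # 0 in this run
resv=$(meminfo_value HugePages_Rsvd)     # 0 in this run
total=$(meminfo_value HugePages_Total)   # 1024 in this run
nr_hugepages=1024                        # page count the test configured earlier
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"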
15:59:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.208 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.208 15:59:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 
15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.209 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.209 15:59:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 
00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:44.210 15:59:48 -- setup/common.sh@33 -- # echo 1024 00:22:44.210 15:59:48 -- setup/common.sh@33 -- # return 0 00:22:44.210 15:59:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:44.210 15:59:48 -- setup/hugepages.sh@112 -- # get_nodes 00:22:44.210 15:59:48 -- setup/hugepages.sh@27 -- # local node 00:22:44.210 15:59:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:44.210 15:59:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:22:44.210 15:59:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:44.210 15:59:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:44.210 15:59:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:44.210 15:59:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:44.210 15:59:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:44.210 15:59:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:44.210 15:59:48 -- setup/common.sh@18 -- # local node=0 00:22:44.210 15:59:48 -- setup/common.sh@19 -- # local var val 00:22:44.210 15:59:48 -- setup/common.sh@20 -- # local mem_f mem 00:22:44.210 15:59:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:44.210 15:59:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:44.210 15:59:48 -- 
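At this point the check moves from the global /proc/meminfo view to per-node accounting: get_nodes finds a single NUMA node, expects all 1024 pages on it, and re-reads HugePages_Surp from /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the helper strips. A rough, illustrative version of that per-node walk (not the script's own nodes_sys bookkeeping) is:

# Illustrative only; mirrors the per-node bookkeeping visible in the trace above.
declare -a nodes_sys=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    nodes_sys[node]=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
done
echo "nodes: ${#nodes_sys[@]}"        # 1 on this VM
echo "node0 pages: ${nodes_sys[0]}"   # 1024, matching the global HugePages_Total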
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:44.210 15:59:48 -- setup/common.sh@28 -- # mapfile -t mem 00:22:44.210 15:59:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5007772 kB' 'MemUsed: 7238548 kB' 'SwapCached: 0 kB' 'Active: 412620 kB' 'Inactive: 4232640 kB' 'Active(anon): 124664 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4531936 kB' 'Mapped: 58276 kB' 'AnonPages: 142324 kB' 'Shmem: 2592 kB' 'KernelStack: 4976 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.210 15:59:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:22:44.210 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.210 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 
00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 
00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # continue 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # IFS=': ' 00:22:44.211 15:59:48 -- setup/common.sh@31 -- # read -r var val _ 00:22:44.211 15:59:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:44.211 15:59:48 -- setup/common.sh@33 -- # echo 0 00:22:44.211 15:59:48 -- setup/common.sh@33 -- # return 0 00:22:44.211 15:59:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:44.211 15:59:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:44.211 15:59:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:44.211 15:59:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:44.211 node0=1024 expecting 1024 00:22:44.211 15:59:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:44.211 15:59:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:44.211 00:22:44.211 real 0m1.185s 00:22:44.211 user 0m0.254s 00:22:44.211 sys 0m0.972s 00:22:44.211 15:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.211 15:59:48 -- common/autotest_common.sh@10 -- # set +x 00:22:44.211 ************************************ 00:22:44.211 END TEST even_2G_alloc 00:22:44.211 ************************************ 00:22:44.211 15:59:48 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:22:44.211 15:59:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:44.211 15:59:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:44.211 15:59:48 -- common/autotest_common.sh@10 -- # set +x 00:22:44.211 ************************************ 00:22:44.211 START TEST odd_alloc 00:22:44.211 ************************************ 00:22:44.211 15:59:48 -- common/autotest_common.sh@1104 -- # odd_alloc 00:22:44.211 15:59:48 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:22:44.211 15:59:48 -- setup/hugepages.sh@49 -- # local size=2098176 00:22:44.211 15:59:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:22:44.211 15:59:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:44.211 15:59:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:22:44.211 15:59:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:22:44.211 15:59:48 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:44.211 15:59:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:44.211 15:59:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:22:44.211 15:59:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:44.212 15:59:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:44.212 15:59:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:44.212 15:59:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:44.212 15:59:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:22:44.212 15:59:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:44.212 15:59:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:22:44.212 15:59:48 -- setup/hugepages.sh@83 -- # : 0 00:22:44.212 15:59:48 
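even_2G_alloc finishes successfully here: node 0 holds exactly the 1024 pages the test expects ("node0=1024 expecting 1024"), and the whole check took about 1.2 s of wall time. The next test, odd_alloc, immediately asks get_test_nr_hugepages for 2098176 kB, which at the 2048 kB page size reported in the dumps above works out to 1025 pages, a deliberately odd count. The arithmetic, shown only as a worked example rather than the script's exact code:

# 2098176 kB requested, 2048 kB per huge page -> 1025 pages (rounded up).
size_kb=2098176
hugepagesize_kb=2048
echo $(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # prints 1025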
-- setup/hugepages.sh@84 -- # : 0 00:22:44.212 15:59:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:44.212 15:59:48 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:22:44.212 15:59:48 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:22:44.212 15:59:48 -- setup/hugepages.sh@160 -- # setup output 00:22:44.212 15:59:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:44.212 15:59:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:44.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:44.470 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:45.038 15:59:49 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:22:45.038 15:59:49 -- setup/hugepages.sh@89 -- # local node 00:22:45.038 15:59:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:45.038 15:59:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:45.038 15:59:49 -- setup/hugepages.sh@92 -- # local surp 00:22:45.038 15:59:49 -- setup/hugepages.sh@93 -- # local resv 00:22:45.038 15:59:49 -- setup/hugepages.sh@94 -- # local anon 00:22:45.038 15:59:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:45.038 15:59:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:45.038 15:59:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:45.038 15:59:49 -- setup/common.sh@18 -- # local node= 00:22:45.038 15:59:49 -- setup/common.sh@19 -- # local var val 00:22:45.038 15:59:49 -- setup/common.sh@20 -- # local mem_f mem 00:22:45.038 15:59:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:45.038 15:59:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:45.038 15:59:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:45.038 15:59:49 -- setup/common.sh@28 -- # mapfile -t mem 00:22:45.038 15:59:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5006440 kB' 'MemAvailable: 9386312 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411872 kB' 'Inactive: 4232640 kB' 'Active(anon): 123916 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141556 kB' 'Mapped: 57412 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'KernelStack: 4944 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 
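For odd_alloc the job exports HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes and re-runs scripts/setup.sh; the two PCI lines show the virtio-blk device at 0000:00:03.0 being left alone because its vda partitions are mounted, while 0000:00:06.0 is already bound to uio_pci_generic. verify_nr_hugepages then begins by checking whether transparent huge pages are disabled before sampling AnonHugePages, which the lookup that follows resolves to 0. A small illustrative equivalent of that THP guard, assuming the standard sysfs location rather than quoting the script:

# Illustrative guard similar to the check traced from setup/hugepages.sh;
# AnonHugePages is only meaningful when THP is not set to [never].
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon_kb=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"                   # 0 kB in this run
fi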
-- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.038 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.038 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- 
setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.039 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.039 15:59:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.040 15:59:49 -- setup/common.sh@33 -- # echo 0 00:22:45.040 15:59:49 -- setup/common.sh@33 -- # return 0 00:22:45.040 15:59:49 -- setup/hugepages.sh@97 -- # anon=0 00:22:45.040 15:59:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:45.040 15:59:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:45.040 15:59:49 -- setup/common.sh@18 -- # local node= 00:22:45.040 15:59:49 -- setup/common.sh@19 -- # local var val 00:22:45.040 15:59:49 -- setup/common.sh@20 -- # local mem_f mem 00:22:45.040 15:59:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:45.040 15:59:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:45.040 15:59:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:45.040 15:59:49 -- setup/common.sh@28 -- # mapfile -t mem 00:22:45.040 15:59:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5013740 kB' 'MemAvailable: 9393612 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411996 kB' 'Inactive: 4232640 kB' 'Active(anon): 124040 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141388 kB' 'Mapped: 57396 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'KernelStack: 4912 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 
-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.040 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.040 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- 
setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 
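The repetitive trace entries here come from the get_meminfo helper in setup/common.sh: after reading the meminfo snapshot into an array with mapfile, it splits each "Key: value" line on IFS=': ', skips every key that does not match the requested field (each skip shows up as a "continue" entry above), and echoes the value once the field is found. A minimal stand-alone sketch of that pattern, with a hypothetical function name and without the mapfile/per-node handling the real helper has, might look like:

    # Sketch only; the real helper also reads /sys/devices/system/node/node*/meminfo
    # when a node is given, and returns through the caller's command substitution.
    get_meminfo_field() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skipped keys correspond to the "continue" lines in the trace
            echo "$val"                        # e.g. 0 for HugePages_Surp, 1025 for HugePages_Total
            return 0
        done < "$mem_f"
        return 1
    }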
00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.041 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.041 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.042 15:59:49 -- setup/common.sh@33 -- # echo 0 00:22:45.042 15:59:49 -- setup/common.sh@33 -- # return 0 00:22:45.042 15:59:49 -- setup/hugepages.sh@99 -- # surp=0 00:22:45.042 15:59:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:45.042 15:59:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:45.042 15:59:49 -- setup/common.sh@18 -- # local node= 00:22:45.042 15:59:49 -- setup/common.sh@19 -- # local var val 00:22:45.042 15:59:49 -- setup/common.sh@20 -- # local mem_f mem 00:22:45.042 15:59:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:45.042 15:59:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:45.042 15:59:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:45.042 15:59:49 -- setup/common.sh@28 -- # mapfile -t mem 00:22:45.042 15:59:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5013960 kB' 'MemAvailable: 9393832 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411820 kB' 'Inactive: 
4232640 kB' 'Active(anon): 123864 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141484 kB' 'Mapped: 57396 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'KernelStack: 4928 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- 
# IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.042 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.042 15:59:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 
15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 
15:59:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.303 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.303 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 
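For orientation while the scans repeat: the test is collecting three hugepage counters from the same snapshot, HugePages_Surp (surplus pages allocated beyond the configured pool), HugePages_Rsvd (pages reserved for mappings but not yet faulted in), and HugePages_Total (the configured pool size). As an aside, not part of the test itself, the same counters can be read in one pass with a one-liner such as:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo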
00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:45.304 15:59:49 -- setup/common.sh@33 -- # echo 0 00:22:45.304 15:59:49 -- setup/common.sh@33 -- # return 0 00:22:45.304 15:59:49 -- setup/hugepages.sh@100 -- # resv=0 00:22:45.304 nr_hugepages=1025 00:22:45.304 15:59:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:22:45.304 resv_hugepages=0 00:22:45.304 15:59:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:45.304 surplus_hugepages=0 00:22:45.304 15:59:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:45.304 anon_hugepages=0 00:22:45.304 15:59:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:45.304 15:59:49 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:22:45.304 15:59:49 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:22:45.304 15:59:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:45.304 15:59:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:45.304 15:59:49 -- setup/common.sh@18 -- # local node= 00:22:45.304 15:59:49 -- setup/common.sh@19 -- # local var val 00:22:45.304 15:59:49 -- setup/common.sh@20 -- # local mem_f mem 00:22:45.304 15:59:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:45.304 15:59:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:45.304 15:59:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:45.304 15:59:49 -- setup/common.sh@28 -- # mapfile -t mem 00:22:45.304 15:59:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5013708 kB' 'MemAvailable: 9393580 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411824 kB' 'Inactive: 4232640 kB' 'Active(anon): 123868 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141480 kB' 'Mapped: 57396 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260400 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79372 kB' 'KernelStack: 4928 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20008 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # 
continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # IFS=': 
' 00:22:45.304 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.304 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:45.305 15:59:49 -- setup/common.sh@33 -- # echo 1025 00:22:45.305 15:59:49 -- setup/common.sh@33 -- # return 0 00:22:45.305 15:59:49 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:22:45.305 15:59:49 -- setup/hugepages.sh@112 -- # get_nodes 00:22:45.305 15:59:49 -- setup/hugepages.sh@27 -- # local node 00:22:45.305 15:59:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:45.305 15:59:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:22:45.305 15:59:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:45.305 15:59:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:45.305 15:59:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:45.305 15:59:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:45.305 15:59:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:45.305 15:59:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:45.305 15:59:49 -- setup/common.sh@18 -- # local node=0 00:22:45.305 15:59:49 -- setup/common.sh@19 -- # local var val 00:22:45.305 15:59:49 -- setup/common.sh@20 -- # local mem_f mem 00:22:45.305 15:59:49 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:22:45.305 15:59:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:45.305 15:59:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:45.305 15:59:49 -- setup/common.sh@28 -- # mapfile -t mem 00:22:45.305 15:59:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5013708 kB' 'MemUsed: 7232612 kB' 'SwapCached: 0 kB' 'Active: 411876 kB' 'Inactive: 4232640 kB' 'Active(anon): 123920 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4531936 kB' 'Mapped: 57396 kB' 'AnonPages: 141532 kB' 'Shmem: 2592 kB' 'KernelStack: 4928 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181028 kB' 'Slab: 260396 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.305 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.305 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # 
continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # continue 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.306 15:59:49 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.306 15:59:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.306 15:59:49 -- setup/common.sh@33 -- # echo 0 00:22:45.306 15:59:49 -- setup/common.sh@33 -- # return 0 00:22:45.306 15:59:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:45.306 15:59:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:45.306 node0=1025 expecting 1025 00:22:45.306 15:59:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:45.306 15:59:49 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:22:45.306 15:59:49 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:22:45.306 00:22:45.306 real 0m0.999s 00:22:45.306 user 0m0.226s 00:22:45.306 sys 0m0.814s 00:22:45.306 15:59:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:45.306 ************************************ 00:22:45.306 END TEST odd_alloc 00:22:45.306 ************************************ 00:22:45.306 15:59:49 -- common/autotest_common.sh@10 -- # set +x 00:22:45.306 15:59:49 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:22:45.306 15:59:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:45.306 15:59:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:45.306 15:59:49 -- common/autotest_common.sh@10 -- # set +x 00:22:45.306 ************************************ 00:22:45.306 START TEST custom_alloc 00:22:45.306 ************************************ 00:22:45.306 15:59:49 -- common/autotest_common.sh@1104 -- # custom_alloc 00:22:45.306 15:59:49 -- setup/hugepages.sh@167 -- # local IFS=, 00:22:45.306 15:59:49 -- setup/hugepages.sh@169 -- # local node 00:22:45.306 15:59:49 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:22:45.306 15:59:49 -- setup/hugepages.sh@170 -- # local nodes_hp 00:22:45.306 15:59:49 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:22:45.306 15:59:49 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:22:45.306 15:59:49 -- setup/hugepages.sh@49 -- # local size=1048576 00:22:45.306 15:59:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:22:45.306 15:59:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:22:45.306 15:59:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:45.306 15:59:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:45.306 15:59:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:22:45.306 15:59:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1 
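(Annotation, not captured console output: the custom_alloc trace that begins here sizes its pool the same way the preceding steps did. A minimal sketch of that sizing arithmetic, assuming the 2048 kB Hugepagesize and the single NUMA node reported in this run's meminfo dumps; the variable names below are illustrative, not the script's own.)

    # Illustrative restatement of the get_test_nr_hugepages 1048576 step traced above.
    # Assumes Hugepagesize: 2048 kB and a single node, as this run reports.
    size_kb=1048576                            # requested pool size in kB
    hugepage_kb=2048                           # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepage_kb ))  # 1048576 / 2048 = 512 pages
    declare -a nodes_hp
    nodes_hp[0]=$nr_hugepages                  # only node0 exists, so it gets all 512
    HUGENODE="nodes_hp[0]=${nodes_hp[0]}"      # matches the HUGENODE value in the trace
    echo "$HUGENODE"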
00:22:45.306 15:59:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:45.306 15:59:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:45.306 15:59:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:22:45.306 15:59:49 -- setup/hugepages.sh@83 -- # : 0 00:22:45.306 15:59:49 -- setup/hugepages.sh@84 -- # : 0 00:22:45.306 15:59:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:22:45.306 15:59:49 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:22:45.306 15:59:49 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:22:45.306 15:59:49 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:22:45.306 15:59:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:22:45.306 15:59:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:45.306 15:59:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:22:45.306 15:59:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:45.306 15:59:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:45.306 15:59:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:45.306 15:59:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:22:45.306 15:59:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:22:45.306 15:59:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:22:45.306 15:59:49 -- setup/hugepages.sh@78 -- # return 0 00:22:45.307 15:59:49 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:22:45.307 15:59:49 -- setup/hugepages.sh@187 -- # setup output 00:22:45.307 15:59:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:45.307 15:59:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:45.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:45.565 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:45.825 15:59:50 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:22:45.825 15:59:50 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:22:45.825 15:59:50 -- setup/hugepages.sh@89 -- # local node 00:22:45.825 15:59:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:45.825 15:59:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:45.825 15:59:50 -- setup/hugepages.sh@92 -- # local surp 00:22:45.825 15:59:50 -- setup/hugepages.sh@93 -- # local resv 00:22:45.825 15:59:50 -- setup/hugepages.sh@94 -- # local anon 00:22:45.825 15:59:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:45.825 15:59:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:45.825 15:59:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:45.825 15:59:50 -- setup/common.sh@18 -- # local node= 00:22:45.825 15:59:50 -- setup/common.sh@19 -- # local var val 00:22:45.825 15:59:50 -- setup/common.sh@20 -- # local mem_f mem 00:22:45.825 15:59:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:45.825 15:59:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:45.825 15:59:50 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:22:45.825 15:59:50 -- setup/common.sh@28 -- # mapfile -t mem 00:22:45.825 15:59:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:45.825 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.825 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6066460 kB' 'MemAvailable: 10446332 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411856 kB' 'Inactive: 4232640 kB' 'Active(anon): 123900 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141484 kB' 'Mapped: 57368 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260384 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79356 kB' 'KernelStack: 4960 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 
15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.826 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.826 15:59:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:45.827 15:59:50 -- setup/common.sh@33 -- # echo 0 00:22:45.827 15:59:50 -- setup/common.sh@33 -- # return 0 00:22:45.827 15:59:50 -- setup/hugepages.sh@97 -- # anon=0 00:22:45.827 15:59:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:45.827 15:59:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:45.827 15:59:50 -- setup/common.sh@18 -- # local node= 00:22:45.827 15:59:50 -- setup/common.sh@19 -- # local var val 00:22:45.827 15:59:50 -- setup/common.sh@20 -- # local mem_f mem 00:22:45.827 15:59:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:45.827 15:59:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:45.827 15:59:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:45.827 15:59:50 -- setup/common.sh@28 -- # mapfile -t mem 00:22:45.827 15:59:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6066460 kB' 'MemAvailable: 10446332 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411856 kB' 'Inactive: 4232640 kB' 'Active(anon): 123900 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141744 kB' 'Mapped: 57368 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260384 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79356 kB' 'KernelStack: 4960 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- 
setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.827 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.827 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:45.828 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:45.828 15:59:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:45.828 15:59:50 -- setup/common.sh@32 -- # continue 00:22:45.828 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.089 15:59:50 -- setup/common.sh@33 -- # echo 0 00:22:46.089 15:59:50 -- setup/common.sh@33 -- # return 0 00:22:46.089 15:59:50 -- setup/hugepages.sh@99 -- # surp=0 00:22:46.089 15:59:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:46.089 15:59:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:46.089 15:59:50 -- setup/common.sh@18 -- # local node= 00:22:46.089 15:59:50 -- setup/common.sh@19 -- # local var val 00:22:46.089 15:59:50 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.089 15:59:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:46.089 15:59:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:46.089 15:59:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:46.089 15:59:50 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.089 15:59:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6066460 kB' 'MemAvailable: 10446332 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411508 kB' 'Inactive: 4232640 kB' 'Active(anon): 123552 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141364 kB' 'Mapped: 57396 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260384 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79356 kB' 'KernelStack: 4880 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19960 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
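(Annotation: the long runs of "[[ <field> == ... ]] / continue" entries above and below are setup/common.sh's get_meminfo walking every field of the captured meminfo snapshot until it reaches the requested key — HugePages_Surp, then HugePages_Rsvd here — and echoing that value. A condensed, hypothetical equivalent of that loop, restricted to /proc/meminfo for brevity; the real helper also reads per-node meminfo files and strips their "Node N" prefix.)

    # Hypothetical condensation of the field-matching loop traced above.
    # Echoes the value of one /proc/meminfo field, or 0 if it is absent.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on this run, per the trace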
00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.089 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.089 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 
00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.090 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.090 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.091 15:59:50 -- setup/common.sh@33 -- # echo 0 00:22:46.091 15:59:50 -- setup/common.sh@33 -- # return 0 00:22:46.091 15:59:50 -- setup/hugepages.sh@100 -- # resv=0 00:22:46.091 nr_hugepages=512 00:22:46.091 resv_hugepages=0 00:22:46.091 15:59:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:22:46.091 15:59:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:46.091 surplus_hugepages=0 00:22:46.091 anon_hugepages=0 00:22:46.091 15:59:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:46.091 15:59:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:46.091 15:59:50 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:46.091 15:59:50 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:22:46.091 15:59:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:46.091 15:59:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:46.091 15:59:50 -- setup/common.sh@18 -- # local node= 00:22:46.091 15:59:50 -- setup/common.sh@19 -- # local var val 00:22:46.091 15:59:50 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.091 15:59:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:46.091 15:59:50 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:22:46.091 15:59:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:46.091 15:59:50 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.091 15:59:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6066460 kB' 'MemAvailable: 10446332 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411452 kB' 'Inactive: 4232640 kB' 'Active(anon): 123496 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141304 kB' 'Mapped: 57396 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260384 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79356 kB' 'KernelStack: 4864 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 374280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19960 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 
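(Annotation: just before this HugePages_Total scan, verify_nr_hugepages collected surp=0, resv=0 and anon=0 and checked them against the 512 pages requested. A minimal restatement of that consistency check, using the values echoed in this run; the variable names are illustrative.)

    # Illustrative restatement of the verify_nr_hugepages arithmetic traced above,
    # using this run's values: nr_hugepages=512, reserved/surplus/anon all 0.
    requested=512
    nr_hugepages=512
    surp=0
    resv=0
    if (( requested == nr_hugepages + surp + resv )) && (( requested == nr_hugepages )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    fi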
00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.091 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.091 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.092 15:59:50 -- setup/common.sh@33 -- # echo 512 00:22:46.092 15:59:50 -- setup/common.sh@33 -- # return 0 00:22:46.092 15:59:50 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:22:46.092 15:59:50 -- setup/hugepages.sh@112 -- # get_nodes 00:22:46.092 15:59:50 -- setup/hugepages.sh@27 -- # local node 00:22:46.092 15:59:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:46.092 15:59:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:22:46.092 15:59:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:46.092 15:59:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:46.092 15:59:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:46.092 15:59:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:46.092 15:59:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:46.092 15:59:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:46.092 15:59:50 -- setup/common.sh@18 -- # local node=0 00:22:46.092 15:59:50 -- setup/common.sh@19 -- # local var val 00:22:46.092 15:59:50 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.092 15:59:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:46.092 15:59:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:46.092 15:59:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:46.092 15:59:50 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.092 15:59:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 6066460 kB' 'MemUsed: 6179860 kB' 'SwapCached: 0 kB' 'Active: 411564 kB' 'Inactive: 4232640 kB' 'Active(anon): 123608 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4531936 kB' 'Mapped: 57396 kB' 'AnonPages: 141484 kB' 'Shmem: 2592 kB' 'KernelStack: 4928 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181028 kB' 'Slab: 260384 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- 
setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.092 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.092 15:59:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 
00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # continue 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.093 15:59:50 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.093 15:59:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.093 15:59:50 -- setup/common.sh@33 -- # echo 0 00:22:46.093 15:59:50 -- setup/common.sh@33 -- # return 0 00:22:46.093 15:59:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:46.093 15:59:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:46.093 15:59:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:46.093 15:59:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:46.093 node0=512 expecting 512 00:22:46.093 15:59:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:22:46.093 15:59:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:22:46.093 00:22:46.093 real 0m0.754s 00:22:46.093 user 0m0.236s 00:22:46.093 sys 0m0.560s 00:22:46.093 15:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:46.093 
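The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) one key at a time with IFS=': ' read -r var val _ and echoing the value once the requested key matches. A minimal standalone sketch of that pattern, assuming a hypothetical helper name (get_meminfo_sketch is not part of the SPDK scripts), not the repository implementation itself:

get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val
    # With a node id, read the per-node file instead, as the
    # "get_meminfo HugePages_Surp 0" lookup in the trace above does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#Node "$node" }             # per-node lines carry a "Node <N> " prefix
        IFS=': ' read -r var val _ <<<"$line"  # split "Key:   value kB" into key/value
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# e.g. on the test VM in this log, get_meminfo_sketch HugePages_Total 0 prints 512,
# the value the custom_alloc check "(( 512 == nr_hugepages + surp + resv ))" relies on.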
************************************ 00:22:46.093 END TEST custom_alloc 00:22:46.093 ************************************ 00:22:46.093 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:22:46.093 15:59:50 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:22:46.093 15:59:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:46.093 15:59:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:46.093 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:22:46.093 ************************************ 00:22:46.093 START TEST no_shrink_alloc 00:22:46.093 ************************************ 00:22:46.093 15:59:50 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:22:46.093 15:59:50 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:22:46.093 15:59:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:22:46.093 15:59:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:22:46.093 15:59:50 -- setup/hugepages.sh@51 -- # shift 00:22:46.093 15:59:50 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:22:46.093 15:59:50 -- setup/hugepages.sh@52 -- # local node_ids 00:22:46.093 15:59:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:22:46.093 15:59:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:22:46.093 15:59:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:22:46.093 15:59:50 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:22:46.093 15:59:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:22:46.093 15:59:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:22:46.093 15:59:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:22:46.093 15:59:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:22:46.093 15:59:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:22:46.094 15:59:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:22:46.094 15:59:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:22:46.094 15:59:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:22:46.094 15:59:50 -- setup/hugepages.sh@73 -- # return 0 00:22:46.094 15:59:50 -- setup/hugepages.sh@198 -- # setup output 00:22:46.094 15:59:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:46.094 15:59:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:46.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:46.352 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:46.923 15:59:51 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:22:46.923 15:59:51 -- setup/hugepages.sh@89 -- # local node 00:22:46.923 15:59:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:46.923 15:59:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:46.923 15:59:51 -- setup/hugepages.sh@92 -- # local surp 00:22:46.923 15:59:51 -- setup/hugepages.sh@93 -- # local resv 00:22:46.923 15:59:51 -- setup/hugepages.sh@94 -- # local anon 00:22:46.923 15:59:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:46.923 15:59:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:46.923 15:59:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:46.923 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:46.923 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:46.923 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.923 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:46.923 15:59:51 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:22:46.923 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:46.923 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.923 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.923 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.923 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5031372 kB' 'MemAvailable: 9411244 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411912 kB' 'Inactive: 4232640 kB' 'Active(anon): 123956 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141568 kB' 'Mapped: 57356 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'KernelStack: 5024 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19960 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 
15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.924 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.924 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:46.925 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:46.925 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:46.925 15:59:51 -- setup/hugepages.sh@97 -- # anon=0 00:22:46.925 15:59:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:46.925 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:46.925 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:46.925 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:46.925 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.925 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:46.925 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:46.925 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:46.925 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.925 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5031372 kB' 'MemAvailable: 9411244 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 412092 kB' 'Inactive: 4232640 kB' 'Active(anon): 124136 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 
kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141720 kB' 'Mapped: 57356 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'KernelStack: 4976 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19944 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.925 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.925 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- 
setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var 
val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.926 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:46.926 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:46.926 15:59:51 -- setup/hugepages.sh@99 -- # surp=0 00:22:46.926 15:59:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:46.926 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:46.926 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:46.926 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:46.926 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.926 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:46.926 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:46.926 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:46.926 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.926 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5031372 kB' 'MemAvailable: 9411244 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411940 kB' 'Inactive: 4232640 kB' 'Active(anon): 123984 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141564 kB' 'Mapped: 57344 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'KernelStack: 4976 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19960 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.926 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.926 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- 
setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.927 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.927 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 
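The xtrace above is setup/common.sh's get_meminfo walking the memory counters key by key — /proc/meminfo for the system-wide call, or /sys/devices/system/node/node<N>/meminfo when a node index is passed — until it reaches the counter that was requested, here returning 0 for HugePages_Surp and then re-reading the table for HugePages_Rsvd; hugepages.sh then compares the HugePages_Total/Free/Rsvd/Surp values against the requested nr_hugepages. A minimal sketch of that lookup, with an assumed helper name and awk standing in for the script's read loop (illustrative only, not the verbatim SPDK helper):

# Assumed name, for illustration: fetch one counter (e.g. HugePages_Total,
# HugePages_Surp) from the system-wide or per-node meminfo file.
get_hugepage_stat() {
    local key=$1 node=$2
    local src=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        # per-node files prefix every line with "Node <n> "
        src=/sys/devices/system/node/node$node/meminfo
    fi
    awk -v key="$key:" '
        $1 == key { print $2; exit }   # /proc/meminfo layout: "<Key>: <value> [kB]"
        $3 == key { print $4; exit }   # node layout: "Node <n> <Key>: <value> [kB]"
    ' "$src"
}

# The checks traced in this log amount to, roughly:
#   [[ $(get_hugepage_stat HugePages_Total) -eq 1024 ]]
#   [[ $(get_hugepage_stat HugePages_Surp 0) -eq 0 ]]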
00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:46.928 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:46.928 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:46.928 15:59:51 -- setup/hugepages.sh@100 -- # resv=0 00:22:46.928 nr_hugepages=1024 00:22:46.928 15:59:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:22:46.928 resv_hugepages=0 00:22:46.928 15:59:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:46.928 surplus_hugepages=0 00:22:46.928 15:59:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:46.928 anon_hugepages=0 00:22:46.928 15:59:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:46.928 15:59:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:46.928 15:59:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:22:46.928 15:59:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:46.928 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:46.928 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:46.928 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:46.928 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.928 15:59:51 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:22:46.928 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:46.928 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:46.928 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.928 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.928 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5031372 kB' 'MemAvailable: 9411244 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411996 kB' 'Inactive: 4232640 kB' 'Active(anon): 124040 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141652 kB' 'Mapped: 57344 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'KernelStack: 4992 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19960 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- 
# continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.928 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.928 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.929 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.929 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:46.930 15:59:51 -- setup/common.sh@33 -- # echo 1024 00:22:46.930 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:46.930 15:59:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:46.930 15:59:51 -- setup/hugepages.sh@112 -- # get_nodes 00:22:46.930 15:59:51 -- setup/hugepages.sh@27 -- # local node 00:22:46.930 15:59:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:46.930 15:59:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:22:46.930 15:59:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:46.930 15:59:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:46.930 15:59:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:46.930 15:59:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:46.930 15:59:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:46.930 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:46.930 15:59:51 -- setup/common.sh@18 -- # local node=0 00:22:46.930 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:46.930 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:46.930 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:46.930 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:46.930 15:59:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:46.930 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:46.930 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5031372 kB' 'MemUsed: 7214948 kB' 'SwapCached: 0 kB' 'Active: 411764 kB' 'Inactive: 4232640 kB' 'Active(anon): 123808 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4531936 kB' 'Mapped: 57344 kB' 'AnonPages: 141404 kB' 'Shmem: 2592 kB' 'KernelStack: 4992 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # continue 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:46.930 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:46.930 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # 
continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.189 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.189 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.189 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:47.189 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:47.189 15:59:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:47.189 15:59:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:47.189 15:59:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:47.189 15:59:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:47.189 node0=1024 expecting 1024 00:22:47.190 15:59:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:47.190 15:59:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:47.190 15:59:51 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:22:47.190 15:59:51 -- setup/hugepages.sh@202 -- # 
NRHUGE=512 00:22:47.190 15:59:51 -- setup/hugepages.sh@202 -- # setup output 00:22:47.190 15:59:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:47.190 15:59:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:47.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:22:47.496 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:47.496 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:22:47.496 15:59:51 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:22:47.496 15:59:51 -- setup/hugepages.sh@89 -- # local node 00:22:47.496 15:59:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:22:47.496 15:59:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:22:47.496 15:59:51 -- setup/hugepages.sh@92 -- # local surp 00:22:47.496 15:59:51 -- setup/hugepages.sh@93 -- # local resv 00:22:47.496 15:59:51 -- setup/hugepages.sh@94 -- # local anon 00:22:47.496 15:59:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:22:47.496 15:59:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:22:47.496 15:59:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:22:47.496 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:47.496 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:47.496 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:47.496 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:47.496 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:47.496 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:47.496 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:47.496 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5038832 kB' 'MemAvailable: 9418704 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 412124 kB' 'Inactive: 4232640 kB' 'Active(anon): 124168 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141980 kB' 'Mapped: 57392 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260324 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79296 kB' 'KernelStack: 4976 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var 
val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.496 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.496 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 
15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- 
setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:22:47.497 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:47.497 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:47.497 15:59:51 -- setup/hugepages.sh@97 -- # anon=0 00:22:47.497 15:59:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:22:47.497 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:47.497 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:47.497 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:47.497 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:47.497 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:47.497 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:47.497 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:47.497 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:47.497 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5038832 kB' 'MemAvailable: 9418704 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 412060 kB' 'Inactive: 4232640 kB' 'Active(anon): 124104 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141916 kB' 'Mapped: 57404 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260324 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79296 kB' 'KernelStack: 4912 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 
15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.497 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.497 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read 
-r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # 
continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.498 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.498 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:47.498 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:47.498 15:59:51 -- setup/hugepages.sh@99 -- # surp=0 00:22:47.498 15:59:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:22:47.498 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:22:47.498 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:47.498 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:47.498 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:47.498 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:47.498 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:47.498 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:47.498 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:47.498 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.498 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5038580 kB' 'MemAvailable: 9418452 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411904 kB' 'Inactive: 4232640 kB' 'Active(anon): 123948 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 
'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141512 kB' 'Mapped: 57396 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'KernelStack: 4928 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # 
read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 
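The long runs of "[[ FieldName == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" followed by "continue" on either side of this point are setup/common.sh's get_meminfo helper walking every /proc/meminfo field until it reaches the one it was asked for. The sketch below is a simplified, hedged reimplementation for illustration only: the function name and the sed-based prefix stripping are illustrative choices, while the real helper mapfiles the whole file and strips the "Node N" prefix with a parameter expansion.

#!/usr/bin/env bash
# Illustrative sketch of the lookup driving the trace around this point:
# scan /proc/meminfo (or one node's meminfo), split each line on ': ', and
# print the value of the single field the caller asked for.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Drop the "Node N " prefix carried by per-node files, then match keys.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # found it, emit the value
    done < <(sed 's/^Node [0-9]* *//' "$mem_f")
    return 1                                                 # field not present
}

get_meminfo_sketch HugePages_Surp      # prints 0 on this run
get_meminfo_sketch HugePages_Total 0   # per-node form; prints 1024 here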
00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.499 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.499 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:22:47.500 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:47.500 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:47.500 15:59:51 -- setup/hugepages.sh@100 -- # resv=0 00:22:47.500 nr_hugepages=1024 00:22:47.500 resv_hugepages=0 00:22:47.500 surplus_hugepages=0 00:22:47.500 anon_hugepages=0 00:22:47.500 15:59:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:22:47.500 15:59:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:22:47.500 15:59:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:22:47.500 15:59:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:22:47.500 15:59:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:47.500 15:59:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:22:47.500 15:59:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:22:47.500 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:22:47.500 15:59:51 -- setup/common.sh@18 -- # local node= 00:22:47.500 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:47.500 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:47.500 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:47.500 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:22:47.500 15:59:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:22:47.500 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:47.500 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5038580 kB' 'MemAvailable: 9418452 kB' 'Buffers: 36216 kB' 'Cached: 4495720 kB' 'SwapCached: 0 kB' 'Active: 411676 kB' 'Inactive: 4232640 kB' 'Active(anon): 123720 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141512 kB' 'Mapped: 57396 kB' 'Shmem: 2592 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'KernelStack: 4928 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 374412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 3000320 kB' 'DirectMap1G: 11534336 kB' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 
15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.500 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.500 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 
15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 
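The trace above has already established anon=0, surp=0 and resv=0; the pass in progress here fetches HugePages_Total, and just below the script checks that the kernel-reported total (1024) equals the requested nr_hugepages plus surplus and reserved pages before confirming each NUMA node's share. The following is a self-contained, hedged sketch of that accounting, not the SPDK script itself: the awk lookups stand in for the script's own meminfo helper and the function name is illustrative.

# Hedged sketch of the hugepage accounting performed at this point in the test.
verify_hugepages_sketch() {
    local expected=$1 total surp resv node_dir node node_total
    total=$(awk '/HugePages_Total:/ {print $NF}' /proc/meminfo)
    surp=$(awk '/HugePages_Surp:/ {print $NF}' /proc/meminfo)
    resv=$(awk '/HugePages_Rsvd:/ {print $NF}' /proc/meminfo)
    # The pool the kernel reports must cover the requested pages plus any
    # surplus and reserved pages, otherwise the allocation did not stick.
    (( total == expected + surp + resv )) || { echo "hugepage count mismatch"; return 1; }
    # Report how the pool landed on each NUMA node.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        node_total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "node$node=$node_total expecting $expected"
    done
}

verify_hugepages_sketch 1024   # on this single-node VM: "node0=1024 expecting 1024"

On a multi-node machine the expectation would be split across nodes; the trace here shows the single-node case, where node0 carries all 1024 pages.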
00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.501 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:22:47.501 15:59:51 -- setup/common.sh@33 -- # echo 1024 00:22:47.501 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:47.501 15:59:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:22:47.501 15:59:51 -- setup/hugepages.sh@112 -- # get_nodes 00:22:47.501 15:59:51 -- setup/hugepages.sh@27 -- # local node 00:22:47.501 15:59:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:22:47.501 15:59:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:22:47.501 15:59:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:22:47.501 15:59:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:22:47.501 15:59:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:22:47.501 15:59:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:22:47.501 15:59:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:22:47.501 15:59:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:22:47.501 15:59:51 -- setup/common.sh@18 -- # local node=0 00:22:47.501 15:59:51 -- setup/common.sh@19 -- # local var val 00:22:47.501 15:59:51 -- setup/common.sh@20 -- # local mem_f mem 00:22:47.501 15:59:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:22:47.501 15:59:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:22:47.501 15:59:51 -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:22:47.501 15:59:51 -- setup/common.sh@28 -- # mapfile -t mem 00:22:47.501 15:59:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.501 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246320 kB' 'MemFree: 5038580 kB' 'MemUsed: 7207740 kB' 'SwapCached: 0 kB' 'Active: 411628 kB' 'Inactive: 4232640 kB' 'Active(anon): 123672 kB' 'Inactive(anon): 0 kB' 'Active(file): 287956 kB' 'Inactive(file): 4232640 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4531936 kB' 'Mapped: 57396 kB' 'AnonPages: 141444 kB' 'Shmem: 2592 kB' 'KernelStack: 4912 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 181028 kB' 'Slab: 260320 kB' 'SReclaimable: 181028 kB' 'SUnreclaim: 79292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 
00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 
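The node-0 surplus check wraps up just below, the no_shrink_alloc test reports its timing, and clear_hp then returns the machine to a clean state by writing 0 into every per-node, per-size nr_hugepages file and exporting CLEAR_HUGE=yes. Here is a minimal sketch of that cleanup, assuming the standard sysfs layout and root privileges; the real clear_hp iterates the node list it discovered earlier, and the function name is illustrative.

# Minimal sketch of the hugepage cleanup performed at the end of the test.
clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # release this node's pages of this size
        done
    done
    export CLEAR_HUGE=yes                 # tell later stages the pool was cleared
}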
00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # continue 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # IFS=': ' 00:22:47.502 15:59:51 -- setup/common.sh@31 -- # read -r var val _ 00:22:47.502 15:59:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:22:47.502 15:59:51 -- setup/common.sh@33 -- # echo 0 00:22:47.502 15:59:51 -- setup/common.sh@33 -- # return 0 00:22:47.502 15:59:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:22:47.502 15:59:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:22:47.502 15:59:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:22:47.502 15:59:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:22:47.503 node0=1024 expecting 1024 00:22:47.503 15:59:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:22:47.503 15:59:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:22:47.503 00:22:47.503 real 0m1.459s 00:22:47.503 user 0m0.467s 00:22:47.503 sys 0m1.074s 00:22:47.503 15:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.503 ************************************ 00:22:47.503 END TEST no_shrink_alloc 00:22:47.503 ************************************ 00:22:47.503 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:47.503 15:59:51 -- setup/hugepages.sh@217 -- # clear_hp 00:22:47.503 15:59:51 -- setup/hugepages.sh@37 -- # local node hp 00:22:47.503 15:59:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:22:47.503 15:59:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:47.503 15:59:51 -- setup/hugepages.sh@41 -- # echo 0 00:22:47.503 15:59:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:22:47.503 15:59:51 -- setup/hugepages.sh@41 -- # echo 0 00:22:47.503 15:59:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:22:47.503 15:59:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:22:47.503 00:22:47.503 real 0m7.013s 00:22:47.503 user 0m1.859s 00:22:47.503 sys 0m5.393s 00:22:47.503 15:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.503 ************************************ 00:22:47.503 END TEST hugepages 00:22:47.503 ************************************ 00:22:47.503 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:47.761 15:59:51 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:22:47.761 15:59:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:47.761 15:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:47.761 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:47.761 ************************************ 00:22:47.761 START TEST driver 00:22:47.761 ************************************ 00:22:47.761 15:59:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:22:47.761 * Looking for test storage... 
00:22:47.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:47.761 15:59:51 -- setup/driver.sh@68 -- # setup reset 00:22:47.761 15:59:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:47.761 15:59:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:48.328 15:59:52 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:22:48.328 15:59:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:48.328 15:59:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:48.328 15:59:52 -- common/autotest_common.sh@10 -- # set +x 00:22:48.328 ************************************ 00:22:48.328 START TEST guess_driver 00:22:48.328 ************************************ 00:22:48.328 15:59:52 -- common/autotest_common.sh@1104 -- # guess_driver 00:22:48.328 15:59:52 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:22:48.328 15:59:52 -- setup/driver.sh@47 -- # local fail=0 00:22:48.328 15:59:52 -- setup/driver.sh@49 -- # pick_driver 00:22:48.328 15:59:52 -- setup/driver.sh@36 -- # vfio 00:22:48.328 15:59:52 -- setup/driver.sh@21 -- # local iommu_grups 00:22:48.328 15:59:52 -- setup/driver.sh@22 -- # local unsafe_vfio 00:22:48.328 15:59:52 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:22:48.328 15:59:52 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:22:48.328 15:59:52 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:22:48.328 15:59:52 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:22:48.328 15:59:52 -- setup/driver.sh@32 -- # return 1 00:22:48.328 15:59:52 -- setup/driver.sh@38 -- # uio 00:22:48.328 15:59:52 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:22:48.328 15:59:52 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:22:48.328 15:59:52 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:22:48.328 15:59:52 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:22:48.328 15:59:52 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio.ko.zst 00:22:48.328 insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio_pci_generic.ko.zst == *\.\k\o* ]] 00:22:48.328 15:59:52 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:22:48.328 15:59:52 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:22:48.328 15:59:52 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:22:48.328 Looking for driver=uio_pci_generic 00:22:48.328 15:59:52 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:22:48.328 15:59:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:22:48.328 15:59:52 -- setup/driver.sh@45 -- # setup output config 00:22:48.328 15:59:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:48.328 15:59:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:48.587 15:59:52 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:22:48.587 15:59:52 -- setup/driver.sh@58 -- # continue 00:22:48.587 15:59:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:22:48.587 15:59:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:22:48.587 15:59:52 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:22:48.587 15:59:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:22:49.980 15:59:53 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:22:49.980 15:59:53 -- setup/driver.sh@65 -- # setup reset 00:22:49.980 15:59:53 -- setup/common.sh@9 -- # [[ reset == 
output ]] 00:22:49.980 15:59:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:50.240 00:22:50.240 real 0m2.113s 00:22:50.240 user 0m0.341s 00:22:50.240 sys 0m1.811s 00:22:50.240 15:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.240 15:59:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.240 ************************************ 00:22:50.240 END TEST guess_driver 00:22:50.240 ************************************ 00:22:50.240 00:22:50.240 real 0m2.696s 00:22:50.240 user 0m0.528s 00:22:50.240 sys 0m2.266s 00:22:50.240 15:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.240 15:59:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.240 ************************************ 00:22:50.240 END TEST driver 00:22:50.240 ************************************ 00:22:50.501 15:59:54 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:22:50.501 15:59:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:50.501 15:59:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:50.501 15:59:54 -- common/autotest_common.sh@10 -- # set +x 00:22:50.501 ************************************ 00:22:50.501 START TEST devices 00:22:50.501 ************************************ 00:22:50.501 15:59:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:22:50.501 * Looking for test storage... 00:22:50.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:22:50.501 15:59:54 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:22:50.501 15:59:54 -- setup/devices.sh@192 -- # setup reset 00:22:50.501 15:59:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:22:50.501 15:59:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:51.070 15:59:55 -- setup/devices.sh@194 -- # get_zoned_devs 00:22:51.070 15:59:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:22:51.070 15:59:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:22:51.070 15:59:55 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:22:51.070 15:59:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:22:51.070 15:59:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:22:51.070 15:59:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:22:51.070 15:59:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:51.070 15:59:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:22:51.070 15:59:55 -- setup/devices.sh@196 -- # blocks=() 00:22:51.070 15:59:55 -- setup/devices.sh@196 -- # declare -a blocks 00:22:51.070 15:59:55 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:22:51.070 15:59:55 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:22:51.070 15:59:55 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:22:51.070 15:59:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:22:51.070 15:59:55 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:22:51.070 15:59:55 -- setup/devices.sh@201 -- # ctrl=nvme0 00:22:51.070 15:59:55 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:22:51.070 15:59:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:22:51.070 15:59:55 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:22:51.070 15:59:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:22:51.070 15:59:55 -- scripts/common.sh@389 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:51.070 No valid GPT data, bailing 00:22:51.070 15:59:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:51.070 15:59:55 -- scripts/common.sh@393 -- # pt= 00:22:51.070 15:59:55 -- scripts/common.sh@394 -- # return 1 00:22:51.070 15:59:55 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:22:51.070 15:59:55 -- setup/common.sh@76 -- # local dev=nvme0n1 00:22:51.070 15:59:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:51.070 15:59:55 -- setup/common.sh@80 -- # echo 5368709120 00:22:51.070 15:59:55 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:22:51.070 15:59:55 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:22:51.070 15:59:55 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:22:51.070 15:59:55 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:22:51.070 15:59:55 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:22:51.070 15:59:55 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:22:51.070 15:59:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:51.070 15:59:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:51.070 15:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:51.070 ************************************ 00:22:51.070 START TEST nvme_mount 00:22:51.070 ************************************ 00:22:51.070 15:59:55 -- common/autotest_common.sh@1104 -- # nvme_mount 00:22:51.070 15:59:55 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:22:51.070 15:59:55 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:22:51.070 15:59:55 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:51.070 15:59:55 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:51.070 15:59:55 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:22:51.070 15:59:55 -- setup/common.sh@39 -- # local disk=nvme0n1 00:22:51.070 15:59:55 -- setup/common.sh@40 -- # local part_no=1 00:22:51.070 15:59:55 -- setup/common.sh@41 -- # local size=1073741824 00:22:51.070 15:59:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:22:51.070 15:59:55 -- setup/common.sh@44 -- # parts=() 00:22:51.070 15:59:55 -- setup/common.sh@44 -- # local parts 00:22:51.070 15:59:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:22:51.070 15:59:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:51.070 15:59:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:51.070 15:59:55 -- setup/common.sh@46 -- # (( part++ )) 00:22:51.070 15:59:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:51.070 15:59:55 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:22:51.070 15:59:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:22:51.070 15:59:55 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:22:52.005 Creating new GPT entries in memory. 00:22:52.005 GPT data structures destroyed! You may now partition the disk using fdisk or 00:22:52.005 other utilities. 00:22:52.005 15:59:56 -- setup/common.sh@57 -- # (( part = 1 )) 00:22:52.005 15:59:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:52.005 15:59:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:22:52.005 15:59:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:22:52.005 15:59:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:22:53.377 Creating new GPT entries in memory. 00:22:53.377 The operation has completed successfully. 00:22:53.377 15:59:57 -- setup/common.sh@57 -- # (( part++ )) 00:22:53.377 15:59:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:53.377 15:59:57 -- setup/common.sh@62 -- # wait 55548 00:22:53.377 15:59:57 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:53.377 15:59:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:22:53.377 15:59:57 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:53.377 15:59:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:22:53.377 15:59:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:22:53.377 15:59:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:53.377 15:59:57 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:53.377 15:59:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:53.377 15:59:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:22:53.377 15:59:57 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:53.377 15:59:57 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:53.377 15:59:57 -- setup/devices.sh@53 -- # local found=0 00:22:53.377 15:59:57 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:53.377 15:59:57 -- setup/devices.sh@56 -- # : 00:22:53.377 15:59:57 -- setup/devices.sh@59 -- # local pci status 00:22:53.377 15:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:53.377 15:59:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:53.377 15:59:57 -- setup/devices.sh@47 -- # setup output config 00:22:53.377 15:59:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:53.377 15:59:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:53.377 15:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:53.377 15:59:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:22:53.377 15:59:57 -- setup/devices.sh@63 -- # found=1 00:22:53.377 15:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:53.377 15:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:53.377 15:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:53.377 15:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:53.377 15:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:54.753 15:59:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:54.753 15:59:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:22:54.753 15:59:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:54.753 15:59:58 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:54.753 15:59:58 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:54.753 15:59:58 -- setup/devices.sh@110 -- # cleanup_nvme 00:22:54.753 15:59:58 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:54.753 15:59:58 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:54.753 15:59:58 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:54.753 15:59:58 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:22:54.753 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:22:54.753 15:59:58 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:22:54.753 15:59:58 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:54.753 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:22:54.753 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:22:54.753 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:22:54.753 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:22:54.753 15:59:58 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:22:54.753 15:59:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:22:54.753 15:59:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:54.753 15:59:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:22:54.753 15:59:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:22:54.753 15:59:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:54.753 15:59:59 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:54.753 15:59:59 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:54.753 15:59:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:22:54.753 15:59:59 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:54.753 15:59:59 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:54.753 15:59:59 -- setup/devices.sh@53 -- # local found=0 00:22:54.753 15:59:59 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:54.753 15:59:59 -- setup/devices.sh@56 -- # : 00:22:54.753 15:59:59 -- setup/devices.sh@59 -- # local pci status 00:22:54.753 15:59:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:54.753 15:59:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:54.753 15:59:59 -- setup/devices.sh@47 -- # setup output config 00:22:54.753 15:59:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:54.753 15:59:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:55.013 15:59:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:55.013 15:59:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:22:55.013 15:59:59 -- setup/devices.sh@63 -- # found=1 00:22:55.013 15:59:59 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:22:55.013 15:59:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:55.013 15:59:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:55.272 15:59:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:55.272 15:59:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:56.661 16:00:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:56.661 16:00:00 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:22:56.662 16:00:00 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:56.662 16:00:00 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:22:56.662 16:00:00 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:22:56.662 16:00:00 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:56.662 16:00:00 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:22:56.662 16:00:00 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:22:56.662 16:00:00 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:22:56.662 16:00:00 -- setup/devices.sh@50 -- # local mount_point= 00:22:56.662 16:00:00 -- setup/devices.sh@51 -- # local test_file= 00:22:56.662 16:00:00 -- setup/devices.sh@53 -- # local found=0 00:22:56.662 16:00:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:22:56.662 16:00:00 -- setup/devices.sh@59 -- # local pci status 00:22:56.662 16:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:56.662 16:00:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:22:56.662 16:00:00 -- setup/devices.sh@47 -- # setup output config 00:22:56.662 16:00:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:22:56.662 16:00:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:22:56.662 16:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:56.662 16:00:00 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:22:56.662 16:00:00 -- setup/devices.sh@63 -- # found=1 00:22:56.662 16:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:56.662 16:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:56.662 16:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:56.662 16:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:22:56.662 16:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:22:58.036 16:00:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:22:58.036 16:00:02 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:22:58.036 16:00:02 -- setup/devices.sh@68 -- # return 0 00:22:58.036 16:00:02 -- setup/devices.sh@128 -- # cleanup_nvme 00:22:58.036 16:00:02 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:22:58.036 16:00:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:22:58.036 16:00:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:22:58.036 16:00:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:22:58.036 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:22:58.036 00:22:58.036 real 0m7.074s 00:22:58.036 user 0m0.462s 00:22:58.036 sys 0m4.409s 00:22:58.036 16:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.036 16:00:02 -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.036 ************************************ 00:22:58.036 END TEST nvme_mount 00:22:58.036 ************************************ 00:22:58.036 16:00:02 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:22:58.036 16:00:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:58.036 16:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:58.036 16:00:02 -- common/autotest_common.sh@10 -- # set +x 00:22:58.293 ************************************ 00:22:58.293 START TEST dm_mount 00:22:58.293 ************************************ 00:22:58.293 16:00:02 -- common/autotest_common.sh@1104 -- # dm_mount 00:22:58.293 16:00:02 -- setup/devices.sh@144 -- # pv=nvme0n1 00:22:58.293 16:00:02 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:22:58.293 16:00:02 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:22:58.293 16:00:02 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:22:58.293 16:00:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:22:58.293 16:00:02 -- setup/common.sh@40 -- # local part_no=2 00:22:58.293 16:00:02 -- setup/common.sh@41 -- # local size=1073741824 00:22:58.293 16:00:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:22:58.293 16:00:02 -- setup/common.sh@44 -- # parts=() 00:22:58.293 16:00:02 -- setup/common.sh@44 -- # local parts 00:22:58.293 16:00:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:22:58.293 16:00:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:58.293 16:00:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:58.293 16:00:02 -- setup/common.sh@46 -- # (( part++ )) 00:22:58.293 16:00:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:58.293 16:00:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:22:58.293 16:00:02 -- setup/common.sh@46 -- # (( part++ )) 00:22:58.294 16:00:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:22:58.294 16:00:02 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:22:58.294 16:00:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:22:58.294 16:00:02 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:22:59.262 Creating new GPT entries in memory. 00:22:59.262 GPT data structures destroyed! You may now partition the disk using fdisk or 00:22:59.262 other utilities. 00:22:59.262 16:00:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:22:59.262 16:00:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:22:59.262 16:00:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:22:59.262 16:00:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:22:59.262 16:00:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:23:00.197 Creating new GPT entries in memory. 00:23:00.197 The operation has completed successfully. 00:23:00.197 16:00:04 -- setup/common.sh@57 -- # (( part++ )) 00:23:00.197 16:00:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:23:00.197 16:00:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:23:00.197 16:00:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:23:00.197 16:00:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:23:01.570 The operation has completed successfully. 
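At this point partition_drive has finished laying out the two 262144-sector test partitions on /dev/nvme0n1: the drive is first zapped with sgdisk, then each partition is written while holding a lock on the whole device, with sync_dev_uevents.sh waiting for the kernel's partition uevents. A condensed sketch of the same sequence, with the sector ranges copied from the trace and "udevadm settle" used as a simpler stand-in for the uevent helper:

disk=/dev/nvme0n1

# Drop any existing GPT/MBR metadata.
sgdisk "$disk" --zap-all

# Create two equal 262144-sector partitions while holding an exclusive
# lock on the whole disk, mirroring the flock calls in the trace.
flock "$disk" sgdisk "$disk" --new=1:2048:264191
flock "$disk" sgdisk "$disk" --new=2:264192:526335

# Wait for the new partition nodes to appear before using them.
udevadm settle

The second "The operation has completed successfully." line above is sgdisk reporting that the second partition write went through; the dm_mount test then builds a device-mapper target on top of nvme0n1p1 and nvme0n1p2.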
00:23:01.570 16:00:05 -- setup/common.sh@57 -- # (( part++ )) 00:23:01.570 16:00:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:23:01.570 16:00:05 -- setup/common.sh@62 -- # wait 55998 00:23:01.570 16:00:05 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:23:01.570 16:00:05 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:01.570 16:00:05 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:23:01.570 16:00:05 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:23:01.570 16:00:05 -- setup/devices.sh@160 -- # for t in {1..5} 00:23:01.570 16:00:05 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:23:01.570 16:00:05 -- setup/devices.sh@161 -- # break 00:23:01.570 16:00:05 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:23:01.570 16:00:05 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:23:01.570 16:00:05 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:23:01.570 16:00:05 -- setup/devices.sh@166 -- # dm=dm-0 00:23:01.570 16:00:05 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:23:01.570 16:00:05 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:23:01.570 16:00:05 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:01.570 16:00:05 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:23:01.570 16:00:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:01.570 16:00:05 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:23:01.570 16:00:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:23:01.570 16:00:05 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:01.570 16:00:05 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:23:01.570 16:00:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:23:01.570 16:00:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:23:01.570 16:00:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:01.570 16:00:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:23:01.570 16:00:05 -- setup/devices.sh@53 -- # local found=0 00:23:01.570 16:00:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:23:01.570 16:00:05 -- setup/devices.sh@56 -- # : 00:23:01.570 16:00:05 -- setup/devices.sh@59 -- # local pci status 00:23:01.570 16:00:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:01.570 16:00:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:23:01.570 16:00:05 -- setup/devices.sh@47 -- # setup output config 00:23:01.570 16:00:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:23:01.570 16:00:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:23:01.570 16:00:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:23:01.570 16:00:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:23:01.570 16:00:05 -- setup/devices.sh@63 -- # found=1 00:23:01.570 16:00:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:01.570 16:00:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:23:01.570 16:00:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:01.827 16:00:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:23:01.827 16:00:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:03.210 16:00:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:23:03.210 16:00:07 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:23:03.210 16:00:07 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:03.210 16:00:07 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:23:03.210 16:00:07 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:23:03.210 16:00:07 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:03.210 16:00:07 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:23:03.210 16:00:07 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:23:03.210 16:00:07 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:23:03.210 16:00:07 -- setup/devices.sh@50 -- # local mount_point= 00:23:03.210 16:00:07 -- setup/devices.sh@51 -- # local test_file= 00:23:03.210 16:00:07 -- setup/devices.sh@53 -- # local found=0 00:23:03.210 16:00:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:23:03.210 16:00:07 -- setup/devices.sh@59 -- # local pci status 00:23:03.210 16:00:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:03.210 16:00:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:23:03.210 16:00:07 -- setup/devices.sh@47 -- # setup output config 00:23:03.210 16:00:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:23:03.210 16:00:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:23:03.210 16:00:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:23:03.210 16:00:07 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:23:03.210 16:00:07 -- setup/devices.sh@63 -- # found=1 00:23:03.210 16:00:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:03.210 16:00:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:23:03.210 16:00:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:03.468 16:00:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:23:03.468 16:00:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:23:04.851 16:00:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:23:04.851 16:00:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:23:04.851 16:00:08 -- setup/devices.sh@68 -- # return 0 00:23:04.851 16:00:08 -- setup/devices.sh@187 -- # cleanup_dm 00:23:04.851 16:00:08 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:04.851 16:00:08 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:23:04.851 16:00:08 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:23:04.851 16:00:08 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:23:04.851 16:00:08 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:23:04.851 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:23:04.851 16:00:08 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:23:04.851 16:00:08 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:23:04.851 00:23:04.851 real 0m6.530s 00:23:04.851 user 0m0.368s 00:23:04.851 sys 0m3.084s 00:23:04.851 16:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.851 16:00:08 -- common/autotest_common.sh@10 -- # set +x 00:23:04.851 ************************************ 00:23:04.851 END TEST dm_mount 00:23:04.851 ************************************ 00:23:04.851 16:00:08 -- setup/devices.sh@1 -- # cleanup 00:23:04.851 16:00:08 -- setup/devices.sh@11 -- # cleanup_nvme 00:23:04.851 16:00:08 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:23:04.851 16:00:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:23:04.852 16:00:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:23:04.852 16:00:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:23:04.852 16:00:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:23:05.108 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:23:05.108 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:23:05.108 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:23:05.108 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:23:05.108 16:00:09 -- setup/devices.sh@12 -- # cleanup_dm 00:23:05.108 16:00:09 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:23:05.108 16:00:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:23:05.108 16:00:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:23:05.108 16:00:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:23:05.108 16:00:09 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:23:05.108 16:00:09 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:23:05.108 00:23:05.108 real 0m14.655s 00:23:05.108 user 0m1.140s 00:23:05.108 sys 0m7.996s 00:23:05.108 16:00:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.108 16:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:05.108 ************************************ 00:23:05.108 END TEST devices 00:23:05.108 ************************************ 00:23:05.108 00:23:05.108 real 0m30.171s 00:23:05.108 user 0m4.705s 00:23:05.108 sys 0m20.444s 00:23:05.108 16:00:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.108 ************************************ 00:23:05.108 END TEST setup.sh 00:23:05.108 ************************************ 00:23:05.108 16:00:09 -- common/autotest_common.sh@10 -- # set +x 00:23:05.108 16:00:09 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:23:05.367 Hugepages 00:23:05.367 node hugesize free / total 00:23:05.367 node0 1048576kB 0 / 0 00:23:05.367 node0 2048kB 2048 / 2048 00:23:05.367 00:23:05.367 Type BDF Vendor Device NUMA Driver Device Block devices 00:23:05.367 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:23:05.367 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:23:05.626 16:00:09 -- spdk/autotest.sh@141 -- # uname -s 00:23:05.626 16:00:09 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:23:05.626 16:00:09 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:23:05.626 16:00:09 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:05.885 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:23:05.885 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:23:07.280 16:00:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:23:08.212 16:00:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:23:08.212 16:00:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:23:08.212 16:00:12 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:23:08.212 16:00:12 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:23:08.212 16:00:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:08.212 16:00:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:23:08.212 16:00:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:08.212 16:00:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:08.212 16:00:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:08.470 16:00:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:23:08.470 16:00:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:23:08.470 16:00:12 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:08.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:23:08.728 Waiting for block devices as requested 00:23:08.728 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:23:08.985 16:00:12 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:23:08.985 16:00:13 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:23:08.985 16:00:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:23:08.985 16:00:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:23:08.985 16:00:13 -- common/autotest_common.sh@1530 -- # grep oacs 00:23:08.985 16:00:13 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:23:08.985 16:00:13 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:23:08.985 16:00:13 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:23:08.985 16:00:13 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:23:08.985 16:00:13 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:23:08.985 16:00:13 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:23:08.985 16:00:13 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:23:08.985 16:00:13 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:23:08.985 16:00:13 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:23:08.985 16:00:13 -- common/autotest_common.sh@1542 -- # 
continue 00:23:08.985 16:00:13 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:23:08.985 16:00:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:08.985 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:23:08.985 16:00:13 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:23:08.985 16:00:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:08.985 16:00:13 -- common/autotest_common.sh@10 -- # set +x 00:23:08.985 16:00:13 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:09.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:23:09.499 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:23:10.873 16:00:14 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:23:10.873 16:00:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:10.873 16:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:10.873 16:00:14 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:23:10.873 16:00:14 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:23:10.873 16:00:14 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:23:10.873 16:00:14 -- common/autotest_common.sh@1562 -- # bdfs=() 00:23:10.873 16:00:14 -- common/autotest_common.sh@1562 -- # local bdfs 00:23:10.873 16:00:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:23:10.873 16:00:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:10.873 16:00:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:23:10.873 16:00:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:10.873 16:00:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:10.873 16:00:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:10.873 16:00:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:23:10.874 16:00:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:23:10.874 16:00:15 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:23:10.874 16:00:15 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:23:10.874 16:00:15 -- common/autotest_common.sh@1565 -- # device=0x0010 00:23:10.874 16:00:15 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:23:10.874 16:00:15 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:23:10.874 16:00:15 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:23:10.874 16:00:15 -- common/autotest_common.sh@1578 -- # return 0 00:23:10.874 16:00:15 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:23:10.874 16:00:15 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:23:10.874 16:00:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:10.874 16:00:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:10.874 16:00:15 -- common/autotest_common.sh@10 -- # set +x 00:23:10.874 ************************************ 00:23:10.874 START TEST unittest 00:23:10.874 ************************************ 00:23:10.874 16:00:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:23:10.874 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:23:10.874 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:23:10.874 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:23:10.874 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 
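In the pre-cleanup pass above, nvme_namespace_revert decided it had nothing to undo on this drive: bit 3 of the OACS word (0x12a) says the controller supports namespace management, but unvmcap is 0, i.e. no NVM capacity is left unallocated, so the loop simply continues; opal_revert_cleanup likewise skips, since the controller's device id 0x0010 is not the 0x0a54 model that cleanup targets. A condensed sketch of the namespace check, using the same nvme-cli fields as the trace (the device path and the message are illustrative):

ctrl=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. 0x12a
if (( oacs & 0x8 )); then    # bit 3: namespace management supported
    unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
    if (( unvmcap == 0 )); then
        echo "all capacity already allocated to namespaces; nothing to revert"
    fi
fi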
00:23:10.874 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:23:10.874 + rootdir=/home/vagrant/spdk_repo/spdk 00:23:10.874 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:10.874 ++ rpc_py=rpc_cmd 00:23:10.874 ++ set -e 00:23:10.874 ++ shopt -s nullglob 00:23:10.874 ++ shopt -s extglob 00:23:10.874 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:10.874 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:10.874 +++ CONFIG_WPDK_DIR= 00:23:10.874 +++ CONFIG_ASAN=y 00:23:10.874 +++ CONFIG_VBDEV_COMPRESS=n 00:23:10.874 +++ CONFIG_HAVE_EXECINFO_H=y 00:23:10.874 +++ CONFIG_USDT=n 00:23:10.874 +++ CONFIG_CUSTOMOCF=n 00:23:10.874 +++ CONFIG_PREFIX=/usr/local 00:23:10.874 +++ CONFIG_RBD=n 00:23:10.874 +++ CONFIG_LIBDIR= 00:23:10.874 +++ CONFIG_IDXD=y 00:23:10.874 +++ CONFIG_NVME_CUSE=y 00:23:10.874 +++ CONFIG_SMA=n 00:23:10.874 +++ CONFIG_VTUNE=n 00:23:10.874 +++ CONFIG_TSAN=n 00:23:10.874 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:10.874 +++ CONFIG_VFIO_USER_DIR= 00:23:10.874 +++ CONFIG_PGO_CAPTURE=n 00:23:10.874 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:10.874 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:10.874 +++ CONFIG_LTO=n 00:23:10.874 +++ CONFIG_ISCSI_INITIATOR=y 00:23:10.874 +++ CONFIG_CET=n 00:23:10.874 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:10.874 +++ CONFIG_OCF_PATH= 00:23:10.874 +++ CONFIG_RDMA_SET_TOS=y 00:23:10.874 +++ CONFIG_HAVE_ARC4RANDOM=y 00:23:10.874 +++ CONFIG_HAVE_LIBARCHIVE=n 00:23:10.874 +++ CONFIG_UBLK=y 00:23:10.874 +++ CONFIG_ISAL_CRYPTO=y 00:23:10.874 +++ CONFIG_OPENSSL_PATH= 00:23:10.874 +++ CONFIG_OCF=n 00:23:10.874 +++ CONFIG_FUSE=n 00:23:10.874 +++ CONFIG_VTUNE_DIR= 00:23:10.874 +++ CONFIG_FUZZER_LIB= 00:23:10.874 +++ CONFIG_FUZZER=n 00:23:10.874 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:23:10.874 +++ CONFIG_CRYPTO=n 00:23:10.874 +++ CONFIG_PGO_USE=n 00:23:10.874 +++ CONFIG_VHOST=y 00:23:10.874 +++ CONFIG_DAOS=n 00:23:10.874 +++ CONFIG_DPDK_INC_DIR= 00:23:10.874 +++ CONFIG_DAOS_DIR= 00:23:10.874 +++ CONFIG_UNIT_TESTS=y 00:23:10.874 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:10.874 +++ CONFIG_VIRTIO=y 00:23:10.874 +++ CONFIG_COVERAGE=y 00:23:10.874 +++ CONFIG_RDMA=y 00:23:10.874 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:10.874 +++ CONFIG_URING_PATH= 00:23:10.874 +++ CONFIG_XNVME=n 00:23:10.874 +++ CONFIG_VFIO_USER=n 00:23:10.874 +++ CONFIG_ARCH=native 00:23:10.874 +++ CONFIG_URING_ZNS=n 00:23:10.874 +++ CONFIG_WERROR=y 00:23:10.874 +++ CONFIG_HAVE_LIBBSD=n 00:23:10.874 +++ CONFIG_UBSAN=y 00:23:10.874 +++ CONFIG_IPSEC_MB_DIR= 00:23:10.874 +++ CONFIG_GOLANG=n 00:23:10.874 +++ CONFIG_ISAL=y 00:23:10.874 +++ CONFIG_IDXD_KERNEL=y 00:23:10.874 +++ CONFIG_DPDK_LIB_DIR= 00:23:10.874 +++ CONFIG_RDMA_PROV=verbs 00:23:10.874 +++ CONFIG_APPS=y 00:23:10.874 +++ CONFIG_SHARED=n 00:23:10.874 +++ CONFIG_FC_PATH= 00:23:10.874 +++ CONFIG_DPDK_PKG_CONFIG=n 00:23:10.874 +++ CONFIG_FC=n 00:23:10.874 +++ CONFIG_AVAHI=n 00:23:10.874 +++ CONFIG_FIO_PLUGIN=y 00:23:10.874 +++ CONFIG_RAID5F=y 00:23:10.874 +++ CONFIG_EXAMPLES=y 00:23:10.874 +++ CONFIG_TESTS=y 00:23:10.874 +++ CONFIG_CRYPTO_MLX5=n 00:23:10.874 +++ CONFIG_MAX_LCORES= 00:23:10.874 +++ CONFIG_IPSEC_MB=n 00:23:10.874 +++ CONFIG_DEBUG=y 00:23:10.874 +++ CONFIG_DPDK_COMPRESSDEV=n 00:23:10.874 +++ CONFIG_CROSS_PREFIX= 00:23:10.874 +++ CONFIG_URING=n 00:23:10.874 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:10.874 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 
00:23:10.874 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:10.874 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:10.874 +++ _root=/home/vagrant/spdk_repo/spdk 00:23:10.874 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:10.874 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:10.874 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:10.874 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:10.874 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:10.874 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:10.874 +++ VHOST_APP=("$_app_dir/vhost") 00:23:10.874 +++ DD_APP=("$_app_dir/spdk_dd") 00:23:10.874 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:23:10.874 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:10.874 +++ [[ #ifndef SPDK_CONFIG_H 00:23:10.874 #define SPDK_CONFIG_H 00:23:10.874 #define SPDK_CONFIG_APPS 1 00:23:10.874 #define SPDK_CONFIG_ARCH native 00:23:10.874 #define SPDK_CONFIG_ASAN 1 00:23:10.874 #undef SPDK_CONFIG_AVAHI 00:23:10.874 #undef SPDK_CONFIG_CET 00:23:10.874 #define SPDK_CONFIG_COVERAGE 1 00:23:10.874 #define SPDK_CONFIG_CROSS_PREFIX 00:23:10.874 #undef SPDK_CONFIG_CRYPTO 00:23:10.874 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:10.874 #undef SPDK_CONFIG_CUSTOMOCF 00:23:10.874 #undef SPDK_CONFIG_DAOS 00:23:10.874 #define SPDK_CONFIG_DAOS_DIR 00:23:10.874 #define SPDK_CONFIG_DEBUG 1 00:23:10.874 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:10.874 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:10.874 #define SPDK_CONFIG_DPDK_INC_DIR 00:23:10.874 #define SPDK_CONFIG_DPDK_LIB_DIR 00:23:10.874 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:10.874 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:10.874 #define SPDK_CONFIG_EXAMPLES 1 00:23:10.874 #undef SPDK_CONFIG_FC 00:23:10.874 #define SPDK_CONFIG_FC_PATH 00:23:10.874 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:10.874 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:10.874 #undef SPDK_CONFIG_FUSE 00:23:10.874 #undef SPDK_CONFIG_FUZZER 00:23:10.874 #define SPDK_CONFIG_FUZZER_LIB 00:23:10.874 #undef SPDK_CONFIG_GOLANG 00:23:10.874 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:23:10.874 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:10.874 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:10.874 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:10.874 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:10.874 #define SPDK_CONFIG_IDXD 1 00:23:10.874 #define SPDK_CONFIG_IDXD_KERNEL 1 00:23:10.874 #undef SPDK_CONFIG_IPSEC_MB 00:23:10.874 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:10.874 #define SPDK_CONFIG_ISAL 1 00:23:10.874 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:10.874 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:10.874 #define SPDK_CONFIG_LIBDIR 00:23:10.874 #undef SPDK_CONFIG_LTO 00:23:10.874 #define SPDK_CONFIG_MAX_LCORES 00:23:10.874 #define SPDK_CONFIG_NVME_CUSE 1 00:23:10.874 #undef SPDK_CONFIG_OCF 00:23:10.874 #define SPDK_CONFIG_OCF_PATH 00:23:10.874 #define SPDK_CONFIG_OPENSSL_PATH 00:23:10.874 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:10.874 #undef SPDK_CONFIG_PGO_USE 00:23:10.874 #define SPDK_CONFIG_PREFIX /usr/local 00:23:10.874 #define SPDK_CONFIG_RAID5F 1 00:23:10.874 #undef SPDK_CONFIG_RBD 00:23:10.874 #define SPDK_CONFIG_RDMA 1 00:23:10.874 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:10.874 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:10.874 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:10.874 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:10.874 #undef SPDK_CONFIG_SHARED 00:23:10.874 #undef SPDK_CONFIG_SMA 00:23:10.874 #define 
SPDK_CONFIG_TESTS 1 00:23:10.874 #undef SPDK_CONFIG_TSAN 00:23:10.874 #define SPDK_CONFIG_UBLK 1 00:23:10.874 #define SPDK_CONFIG_UBSAN 1 00:23:10.874 #define SPDK_CONFIG_UNIT_TESTS 1 00:23:10.874 #undef SPDK_CONFIG_URING 00:23:10.874 #define SPDK_CONFIG_URING_PATH 00:23:10.874 #undef SPDK_CONFIG_URING_ZNS 00:23:10.874 #undef SPDK_CONFIG_USDT 00:23:10.874 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:10.874 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:10.874 #undef SPDK_CONFIG_VFIO_USER 00:23:10.874 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:10.874 #define SPDK_CONFIG_VHOST 1 00:23:10.874 #define SPDK_CONFIG_VIRTIO 1 00:23:10.874 #undef SPDK_CONFIG_VTUNE 00:23:10.874 #define SPDK_CONFIG_VTUNE_DIR 00:23:10.874 #define SPDK_CONFIG_WERROR 1 00:23:10.874 #define SPDK_CONFIG_WPDK_DIR 00:23:10.874 #undef SPDK_CONFIG_XNVME 00:23:10.874 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:10.874 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:10.874 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:10.874 +++ [[ -e /bin/wpdk_common.sh ]] 00:23:10.874 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.874 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.875 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:10.875 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:10.875 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:10.875 ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:10.875 ++++ export PATH 00:23:10.875 ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:10.875 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:10.875 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:10.875 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:10.875 +++ 
_pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:10.875 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:10.875 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:10.875 +++ TEST_TAG=N/A 00:23:10.875 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:10.875 ++ : 1 00:23:10.875 ++ export RUN_NIGHTLY 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_RUN_VALGRIND 00:23:10.875 ++ : 1 00:23:10.875 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:23:10.875 ++ : 1 00:23:10.875 ++ export SPDK_TEST_UNITTEST 00:23:10.875 ++ : 00:23:10.875 ++ export SPDK_TEST_AUTOBUILD 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_RELEASE_BUILD 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_ISAL 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_ISCSI 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_ISCSI_INITIATOR 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVME 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVME_PMR 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVME_BP 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVME_CLI 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVME_CUSE 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVME_FDP 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVMF 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_VFIOUSER 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_VFIOUSER_QEMU 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_FUZZER 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_FUZZER_SHORT 00:23:10.875 ++ : rdma 00:23:10.875 ++ export SPDK_TEST_NVMF_TRANSPORT 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_RBD 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_VHOST 00:23:10.875 ++ : 1 00:23:10.875 ++ export SPDK_TEST_BLOCKDEV 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_IOAT 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_BLOBFS 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_VHOST_INIT 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_LVOL 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_VBDEV_COMPRESS 00:23:10.875 ++ : 1 00:23:10.875 ++ export SPDK_RUN_ASAN 00:23:10.875 ++ : 1 00:23:10.875 ++ export SPDK_RUN_UBSAN 00:23:10.875 ++ : 00:23:10.875 ++ export SPDK_RUN_EXTERNAL_DPDK 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_RUN_NON_ROOT 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_CRYPTO 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_FTL 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_OCF 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_VMD 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_OPAL 00:23:10.875 ++ : 00:23:10.875 ++ export SPDK_TEST_NATIVE_DPDK 00:23:10.875 ++ : true 00:23:10.875 ++ export SPDK_AUTOTEST_X 00:23:10.875 ++ : 1 00:23:10.875 ++ export SPDK_TEST_RAID5 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_URING 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_USDT 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_USE_IGB_UIO 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_SCHEDULER 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_SCANBUILD 00:23:10.875 ++ : 00:23:10.875 ++ export SPDK_TEST_NVMF_NICS 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_SMA 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_DAOS 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_XNVME 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_ACCEL_DSA 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_ACCEL_IAA 
00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_ACCEL_IOAT 00:23:10.875 ++ : 00:23:10.875 ++ export SPDK_TEST_FUZZER_TARGET 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_TEST_NVMF_MDNS 00:23:10.875 ++ : 0 00:23:10.875 ++ export SPDK_JSONRPC_GO_CLIENT 00:23:10.875 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:10.875 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:10.875 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:23:10.875 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:23:10.875 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:10.875 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:10.875 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:10.875 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:10.875 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:10.875 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:23:10.875 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:10.875 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:10.875 ++ export PYTHONDONTWRITEBYTECODE=1 00:23:10.875 ++ PYTHONDONTWRITEBYTECODE=1 00:23:10.875 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:10.875 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:10.875 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:10.875 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:10.875 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:23:10.875 ++ rm -rf /var/tmp/asan_suppression_file 00:23:10.875 ++ cat 00:23:10.875 ++ echo leak:libfuse3.so 00:23:10.875 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:10.875 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:10.875 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:10.875 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:10.875 ++ '[' -z /var/spdk/dependencies ']' 00:23:10.875 ++ export DEPENDENCY_DIR 00:23:10.875 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:10.875 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:10.875 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:10.875 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:10.875 ++ export QEMU_BIN= 00:23:10.875 ++ QEMU_BIN= 00:23:10.875 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:10.876 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:10.876 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:10.876 ++ 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:10.876 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:10.876 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:10.876 ++ '[' 0 -eq 0 ']' 00:23:10.876 ++ export valgrind= 00:23:10.876 ++ valgrind= 00:23:10.876 +++ uname -s 00:23:10.876 ++ '[' Linux = Linux ']' 00:23:10.876 ++ HUGEMEM=4096 00:23:10.876 ++ export CLEAR_HUGE=yes 00:23:10.876 ++ CLEAR_HUGE=yes 00:23:10.876 ++ [[ 0 -eq 1 ]] 00:23:10.876 ++ [[ 0 -eq 1 ]] 00:23:10.876 ++ MAKE=make 00:23:10.876 +++ nproc 00:23:10.876 ++ MAKEFLAGS=-j10 00:23:10.876 ++ export HUGEMEM=4096 00:23:10.876 ++ HUGEMEM=4096 00:23:10.876 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:10.876 ++ NO_HUGE=() 00:23:10.876 ++ TEST_MODE= 00:23:10.876 ++ [[ -z '' ]] 00:23:10.876 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:23:10.876 ++ exec 00:23:10.876 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:23:10.876 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:23:10.876 ++ set_test_storage 2147483648 00:23:10.876 ++ [[ -v testdir ]] 00:23:10.876 ++ local requested_size=2147483648 00:23:10.876 ++ local mount target_dir 00:23:10.876 ++ local -A mounts fss sizes avails uses 00:23:10.876 ++ local source fs size avail mount use 00:23:10.876 ++ local storage_fallback storage_candidates 00:23:10.876 +++ mktemp -udt spdk.XXXXXX 00:23:10.876 ++ storage_fallback=/tmp/spdk.0OzQhK 00:23:10.876 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:10.876 ++ [[ -n '' ]] 00:23:10.876 ++ [[ -n '' ]] 00:23:10.876 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.0OzQhK/tests/unit /tmp/spdk.0OzQhK 00:23:10.876 ++ requested_size=2214592512 00:23:10.876 ++ read -r source fs size use avail _ mount 00:23:10.876 +++ df -T 00:23:10.876 +++ grep -v Filesystem 00:23:11.141 ++ mounts["$mount"]=tmpfs 00:23:11.141 ++ fss["$mount"]=tmpfs 00:23:11.141 ++ avails["$mount"]=1252954112 00:23:11.141 ++ sizes["$mount"]=1254023168 00:23:11.141 ++ uses["$mount"]=1069056 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ mounts["$mount"]=/dev/vda1 00:23:11.141 ++ fss["$mount"]=ext4 00:23:11.141 ++ avails["$mount"]=10288451584 00:23:11.141 ++ sizes["$mount"]=19681529856 00:23:11.141 ++ uses["$mount"]=9376301056 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ mounts["$mount"]=tmpfs 00:23:11.141 ++ fss["$mount"]=tmpfs 00:23:11.141 ++ avails["$mount"]=6270115840 00:23:11.141 ++ sizes["$mount"]=6270115840 00:23:11.141 ++ uses["$mount"]=0 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ mounts["$mount"]=tmpfs 00:23:11.141 ++ fss["$mount"]=tmpfs 00:23:11.141 ++ avails["$mount"]=5242880 00:23:11.141 ++ sizes["$mount"]=5242880 00:23:11.141 ++ uses["$mount"]=0 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ mounts["$mount"]=/dev/vda16 00:23:11.141 ++ fss["$mount"]=ext4 00:23:11.141 ++ avails["$mount"]=777306112 00:23:11.141 ++ sizes["$mount"]=923156480 00:23:11.141 ++ uses["$mount"]=81207296 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ mounts["$mount"]=/dev/vda15 00:23:11.141 ++ fss["$mount"]=vfat 00:23:11.141 ++ avails["$mount"]=103000064 00:23:11.141 ++ sizes["$mount"]=109395968 00:23:11.141 ++ uses["$mount"]=6395904 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ 
mounts["$mount"]=tmpfs 00:23:11.141 ++ fss["$mount"]=tmpfs 00:23:11.141 ++ avails["$mount"]=1254010880 00:23:11.141 ++ sizes["$mount"]=1254023168 00:23:11.141 ++ uses["$mount"]=12288 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt/output 00:23:11.141 ++ fss["$mount"]=fuse.sshfs 00:23:11.141 ++ avails["$mount"]=94269624320 00:23:11.141 ++ sizes["$mount"]=105088212992 00:23:11.141 ++ uses["$mount"]=5433155584 00:23:11.141 ++ read -r source fs size use avail _ mount 00:23:11.141 ++ printf '* Looking for test storage...\n' 00:23:11.141 * Looking for test storage... 00:23:11.141 ++ local target_space new_size 00:23:11.141 ++ for target_dir in "${storage_candidates[@]}" 00:23:11.141 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:23:11.141 +++ awk '$1 !~ /Filesystem/{print $6}' 00:23:11.141 ++ mount=/ 00:23:11.141 ++ target_space=10288451584 00:23:11.141 ++ (( target_space == 0 || target_space < requested_size )) 00:23:11.141 ++ (( target_space >= requested_size )) 00:23:11.141 ++ [[ ext4 == tmpfs ]] 00:23:11.141 ++ [[ ext4 == ramfs ]] 00:23:11.141 ++ [[ / == / ]] 00:23:11.141 ++ new_size=11590893568 00:23:11.141 ++ (( new_size * 100 / sizes[/] > 95 )) 00:23:11.141 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:23:11.141 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:23:11.141 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:23:11.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:23:11.141 ++ return 0 00:23:11.141 ++ set -o errtrace 00:23:11.141 ++ shopt -s extdebug 00:23:11.141 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:23:11.141 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:11.141 16:00:15 -- common/autotest_common.sh@1672 -- # true 00:23:11.141 16:00:15 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:23:11.141 16:00:15 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:23:11.141 16:00:15 -- common/autotest_common.sh@29 -- # exec 00:23:11.141 16:00:15 -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:11.141 16:00:15 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:23:11.141 16:00:15 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:11.141 16:00:15 -- common/autotest_common.sh@18 -- # set -x 00:23:11.141 16:00:15 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:23:11.141 16:00:15 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:23:11.141 16:00:15 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:23:11.141 16:00:15 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:23:11.141 16:00:15 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:23:11.141 16:00:15 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:23:11.141 16:00:15 -- unit/unittest.sh@179 -- # hash lcov 00:23:11.141 16:00:15 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:23:11.141 16:00:15 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:11.141 16:00:15 -- unit/unittest.sh@180 -- # cov_avail=yes 00:23:11.141 16:00:15 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:23:11.141 16:00:15 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:23:11.141 16:00:15 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:23:11.141 16:00:15 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:23:11.141 16:00:15 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:23:11.141 --rc lcov_branch_coverage=1 00:23:11.141 --rc lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 ' 00:23:11.141 16:00:15 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:23:11.141 --rc lcov_branch_coverage=1 00:23:11.141 --rc lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 ' 00:23:11.141 16:00:15 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:23:11.141 --rc lcov_branch_coverage=1 00:23:11.141 --rc lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 --no-external' 00:23:11.141 16:00:15 -- unit/unittest.sh@200 -- # LCOV='lcov 00:23:11.141 --rc lcov_branch_coverage=1 00:23:11.141 --rc lcov_function_coverage=1 00:23:11.141 --rc genhtml_branch_coverage=1 00:23:11.141 --rc genhtml_function_coverage=1 00:23:11.141 --rc genhtml_legend=1 00:23:11.141 --rc geninfo_all_blocks=1 00:23:11.141 --no-external' 00:23:11.141 16:00:15 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:23:26.111 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:23:26.111 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:23:26.111 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:23:26.111 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:23:26.111 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:23:26.111 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:24:04.906 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:24:04.906 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:24:04.907 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:24:04.907 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:24:04.907 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:24:04.907 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:24:04.907 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:24:04.908 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:24:04.908 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:24:14.879 16:01:18 -- unit/unittest.sh@206 -- # uname -m 00:24:14.879 16:01:18 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:24:14.879 16:01:18 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:24:14.879 16:01:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:14.879 16:01:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:14.879 16:01:18 -- common/autotest_common.sh@10 -- # set +x 00:24:14.879 ************************************ 00:24:14.879 START TEST unittest_pci_event 00:24:14.879 ************************************ 00:24:14.879 16:01:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:24:14.879 00:24:14.879 00:24:14.879 CUnit - A unit testing framework for C - Version 2.1-3 00:24:14.879 http://cunit.sourceforge.net/ 00:24:14.879 00:24:14.879 00:24:14.879 Suite: pci_event 00:24:14.879 Test: test_pci_parse_event ...[2024-07-22 16:01:19.017805] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:24:14.879 [2024-07-22 16:01:19.018288] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:24:14.879 passed 00:24:14.879 00:24:14.879 Run Summary: Type Total Ran Passed Failed Inactive 00:24:14.879 suites 1 1 n/a 0 0 00:24:14.879 tests 1 1 1 0 0 00:24:14.879 asserts 15 15 15 0 n/a 00:24:14.879 00:24:14.879 Elapsed time = 0.001 seconds 00:24:14.879 00:24:14.879 real 0m0.037s 00:24:14.879 user 0m0.015s 00:24:14.879 sys 0m0.014s 00:24:14.879 16:01:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.879 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:14.879 ************************************ 00:24:14.879 END TEST unittest_pci_event 00:24:14.879 ************************************ 00:24:14.879 
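Note on the coverage capture traced above: unittest.sh saw CC_TYPE=gcc with SPDK_CONFIG_COVERAGE enabled, exported the LCOV_OPTS shown earlier, created the ut_coverage output directory, and captured an initial (-i) baseline into ut_cov_base.info before any test ran. The long run of "geninfo: WARNING: ... no functions found" messages is largely expected here: the test/cpp_headers objects only verify that each public header compiles on its own, so there are no functions for gcov to report. A minimal sketch of that workflow follows; step 1 mirrors the command in this log, while the post-test capture, the merge step, and their file names are assumptions added for illustration.

  # Sketch of the lcov flow implied by the trace (steps 2-3 and their output names are hypothetical).
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
  UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
  mkdir -p "$UT_COVERAGE"
  # 1. Baseline capture (-i records zero execution counts) before any test runs, as seen above.
  lcov $LCOV_OPTS --no-external -q -c -i -d . -t Baseline -o "$UT_COVERAGE/ut_cov_base.info"
  # 2. Run the unit tests so the instrumented objects write their .gcda data.
  # 3. Capture again and merge with the baseline so files never touched by a test still show up at 0%.
  lcov $LCOV_OPTS --no-external -q -c -d . -t Tests -o "$UT_COVERAGE/ut_cov_test.info"
  lcov -a "$UT_COVERAGE/ut_cov_base.info" -a "$UT_COVERAGE/ut_cov_test.info" -o "$UT_COVERAGE/ut_cov_total.info"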
16:01:19 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:24:14.879 16:01:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:14.879 16:01:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:14.879 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:14.879 ************************************ 00:24:14.879 START TEST unittest_include 00:24:14.879 ************************************ 00:24:14.879 16:01:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:24:14.879 00:24:14.879 00:24:14.879 CUnit - A unit testing framework for C - Version 2.1-3 00:24:14.879 http://cunit.sourceforge.net/ 00:24:14.879 00:24:14.879 00:24:14.879 Suite: histogram 00:24:14.879 Test: histogram_test ...passed 00:24:14.879 Test: histogram_merge ...passed 00:24:14.879 00:24:14.879 Run Summary: Type Total Ran Passed Failed Inactive 00:24:14.879 suites 1 1 n/a 0 0 00:24:14.879 tests 2 2 2 0 0 00:24:14.879 asserts 50 50 50 0 n/a 00:24:14.879 00:24:14.879 Elapsed time = 0.006 seconds 00:24:14.879 00:24:14.879 real 0m0.036s 00:24:14.879 user 0m0.025s 00:24:14.879 sys 0m0.012s 00:24:14.879 16:01:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.879 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:14.879 ************************************ 00:24:14.879 END TEST unittest_include 00:24:14.879 ************************************ 00:24:15.165 16:01:19 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:24:15.165 16:01:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:15.165 16:01:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:15.165 16:01:19 -- common/autotest_common.sh@10 -- # set +x 00:24:15.165 ************************************ 00:24:15.165 START TEST unittest_bdev 00:24:15.165 ************************************ 00:24:15.165 16:01:19 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:24:15.165 16:01:19 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:24:15.165 00:24:15.165 00:24:15.165 CUnit - A unit testing framework for C - Version 2.1-3 00:24:15.165 http://cunit.sourceforge.net/ 00:24:15.165 00:24:15.165 00:24:15.165 Suite: bdev 00:24:15.165 Test: bytes_to_blocks_test ...passed 00:24:15.165 Test: num_blocks_test ...passed 00:24:15.165 Test: io_valid_test ...passed 00:24:15.165 Test: open_write_test ...[2024-07-22 16:01:19.246510] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:24:15.165 [2024-07-22 16:01:19.246878] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:24:15.165 [2024-07-22 16:01:19.247043] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:24:15.165 passed 00:24:15.165 Test: claim_test ...passed 00:24:15.165 Test: alias_add_del_test ...[2024-07-22 16:01:19.332460] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:24:15.165 [2024-07-22 16:01:19.332586] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:24:15.165 [2024-07-22 16:01:19.332649] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev 
name proper alias 0 already exists 00:24:15.165 passed 00:24:15.165 Test: get_device_stat_test ...passed 00:24:15.165 Test: bdev_io_types_test ...passed 00:24:15.165 Test: bdev_io_wait_test ...passed 00:24:15.165 Test: bdev_io_spans_split_test ...passed 00:24:15.424 Test: bdev_io_boundary_split_test ...passed 00:24:15.424 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-22 16:01:19.467264] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:24:15.424 passed 00:24:15.424 Test: bdev_io_mix_split_test ...passed 00:24:15.424 Test: bdev_io_split_with_io_wait ...passed 00:24:15.424 Test: bdev_io_write_unit_split_test ...[2024-07-22 16:01:19.551260] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:24:15.424 [2024-07-22 16:01:19.551372] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:24:15.424 [2024-07-22 16:01:19.551400] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:24:15.424 [2024-07-22 16:01:19.551461] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:24:15.424 passed 00:24:15.424 Test: bdev_io_alignment_with_boundary ...passed 00:24:15.424 Test: bdev_io_alignment ...passed 00:24:15.424 Test: bdev_histograms ...passed 00:24:15.424 Test: bdev_write_zeroes ...passed 00:24:15.683 Test: bdev_compare_and_write ...passed 00:24:15.683 Test: bdev_compare ...passed 00:24:15.683 Test: bdev_compare_emulated ...passed 00:24:15.683 Test: bdev_zcopy_write ...passed 00:24:15.683 Test: bdev_zcopy_read ...passed 00:24:15.683 Test: bdev_open_while_hotremove ...passed 00:24:15.683 Test: bdev_close_while_hotremove ...passed 00:24:15.683 Test: bdev_open_ext_test ...passed 00:24:15.683 Test: bdev_open_ext_unregister ...[2024-07-22 16:01:19.881964] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:24:15.683 passed 00:24:15.683 Test: bdev_set_io_timeout ...[2024-07-22 16:01:19.882178] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:24:15.683 passed 00:24:15.683 Test: bdev_set_qd_sampling ...passed 00:24:15.683 Test: lba_range_overlap ...passed 00:24:15.941 Test: lock_lba_range_check_ranges ...passed 00:24:15.941 Test: lock_lba_range_with_io_outstanding ...passed 00:24:15.941 Test: lock_lba_range_overlapped ...passed 00:24:15.941 Test: bdev_quiesce ...[2024-07-22 16:01:20.021074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
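Note on how these suites are driven: each one in this part of the log (pci_event_ut, histogram_ut, bdev_ut, and the binaries that follow) is a standalone CUnit executable under test/unit, launched through the run_test helper from autotest_common.sh, which prints the START/END TEST banners, disables xtrace while the binary runs, and emits the real/user/sys timings seen above. The real helper lives in the SPDK repository; the fragment below is only a simplified re-creation of the pattern visible in this log, not the actual implementation.

  # Hypothetical stand-in that reproduces the banner/timing pattern shown in this log.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # the wrapped *_ut binary prints the CUnit suite output
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
  }
  # Usage, mirroring one of the invocations above:
  # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut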
00:24:15.941 passed 00:24:15.941 Test: bdev_io_abort ...passed 00:24:15.941 Test: bdev_unmap ...passed 00:24:15.941 Test: bdev_write_zeroes_split_test ...passed 00:24:15.941 Test: bdev_set_options_test ...[2024-07-22 16:01:20.127047] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:24:15.941 passed 00:24:15.941 Test: bdev_get_memory_domains ...passed 00:24:15.941 Test: bdev_io_ext ...passed 00:24:15.941 Test: bdev_io_ext_no_opts ...passed 00:24:15.941 Test: bdev_io_ext_invalid_opts ...passed 00:24:16.201 Test: bdev_io_ext_split ...passed 00:24:16.201 Test: bdev_io_ext_bounce_buffer ...passed 00:24:16.201 Test: bdev_register_uuid_alias ...[2024-07-22 16:01:20.268257] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 19ddc7dc-28f8-4164-8dc1-b36128002a1f already exists 00:24:16.201 [2024-07-22 16:01:20.268337] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:19ddc7dc-28f8-4164-8dc1-b36128002a1f alias for bdev bdev0 00:24:16.201 passed 00:24:16.201 Test: bdev_unregister_by_name ...[2024-07-22 16:01:20.289186] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:24:16.201 passed 00:24:16.201 Test: for_each_bdev_test ...[2024-07-22 16:01:20.289243] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:24:16.201 passed 00:24:16.201 Test: bdev_seek_test ...passed 00:24:16.201 Test: bdev_copy ...passed 00:24:16.201 Test: bdev_copy_split_test ...passed 00:24:16.201 Test: examine_locks ...passed 00:24:16.201 Test: claim_v2_rwo ...[2024-07-22 16:01:20.373509] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:24:16.201 passed 00:24:16.201 Test: claim_v2_rom ...[2024-07-22 16:01:20.373596] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.373620] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.373637] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.373667] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.373697] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:24:16.201 [2024-07-22 16:01:20.373912] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.373953] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.373972] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374000] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374049] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:24:16.201 passed 00:24:16.201 Test: claim_v2_rwm ...[2024-07-22 16:01:20.374088] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:24:16.201 [2024-07-22 16:01:20.374199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:24:16.201 [2024-07-22 16:01:20.374239] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374283] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:24:16.201 passed 00:24:16.201 Test: claim_v2_existing_writer ...[2024-07-22 16:01:20.374300] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374315] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:24:16.201 passed 00:24:16.201 Test: claim_v2_existing_v1 ...[2024-07-22 16:01:20.374498] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:24:16.201 [2024-07-22 16:01:20.374538] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:24:16.201 [2024-07-22 16:01:20.374643] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374670] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:24:16.201 [2024-07-22 16:01:20.374683] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:24:16.201 passed 00:24:16.201 Test: claim_v1_existing_v2 ...[2024-07-22 16:01:20.374789] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:24:16.202 [2024-07-22 16:01:20.374827] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:24:16.202 [2024-07-22 
16:01:20.374857] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:24:16.202 passed 00:24:16.202 Test: examine_claimed ...[2024-07-22 16:01:20.375165] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:24:16.202 passed 00:24:16.202 00:24:16.202 Run Summary: Type Total Ran Passed Failed Inactive 00:24:16.202 suites 1 1 n/a 0 0 00:24:16.202 tests 59 59 59 0 0 00:24:16.202 asserts 4599 4599 4599 0 n/a 00:24:16.202 00:24:16.202 Elapsed time = 1.173 seconds 00:24:16.202 16:01:20 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:24:16.202 00:24:16.202 00:24:16.202 CUnit - A unit testing framework for C - Version 2.1-3 00:24:16.202 http://cunit.sourceforge.net/ 00:24:16.202 00:24:16.202 00:24:16.202 Suite: nvme 00:24:16.202 Test: test_create_ctrlr ...passed 00:24:16.202 Test: test_reset_ctrlr ...[2024-07-22 16:01:20.426929] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 passed 00:24:16.202 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:24:16.202 Test: test_failover_ctrlr ...passed 00:24:16.202 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-22 16:01:20.429620] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.429847] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.430105] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 passed 00:24:16.202 Test: test_pending_reset ...[2024-07-22 16:01:20.431768] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.432077] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 passed 00:24:16.202 Test: test_attach_ctrlr ...[2024-07-22 16:01:20.433192] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:16.202 passed 00:24:16.202 Test: test_aer_cb ...passed 00:24:16.202 Test: test_submit_nvme_cmd ...passed 00:24:16.202 Test: test_add_remove_trid ...passed 00:24:16.202 Test: test_abort ...[2024-07-22 16:01:20.436480] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:24:16.202 passed 00:24:16.202 Test: test_get_io_qpair ...passed 00:24:16.202 Test: test_bdev_unregister ...passed 00:24:16.202 Test: test_compare_ns ...passed 00:24:16.202 Test: test_init_ana_log_page ...passed 00:24:16.202 Test: test_get_memory_domains ...passed 00:24:16.202 Test: test_reconnect_qpair ...[2024-07-22 16:01:20.439167] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.202 passed 00:24:16.202 Test: test_create_bdev_ctrlr ...[2024-07-22 16:01:20.439671] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:24:16.202 passed 00:24:16.202 Test: test_add_multi_ns_to_bdev ...[2024-07-22 16:01:20.440944] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:24:16.202 passed 00:24:16.202 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:24:16.202 Test: test_admin_path ...passed 00:24:16.202 Test: test_reset_bdev_ctrlr ...passed 00:24:16.202 Test: test_find_io_path ...passed 00:24:16.202 Test: test_retry_io_if_ana_state_is_updating ...passed 00:24:16.202 Test: test_retry_io_for_io_path_error ...passed 00:24:16.202 Test: test_retry_io_count ...passed 00:24:16.202 Test: test_concurrent_read_ana_log_page ...passed 00:24:16.202 Test: test_retry_io_for_ana_error ...passed 00:24:16.202 Test: test_check_io_error_resiliency_params ...[2024-07-22 16:01:20.447417] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:24:16.202 [2024-07-22 16:01:20.447472] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:24:16.202 [2024-07-22 16:01:20.447502] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:24:16.202 [2024-07-22 16:01:20.447521] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:24:16.202 [2024-07-22 16:01:20.447537] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:24:16.202 [2024-07-22 16:01:20.447566] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:24:16.202 [2024-07-22 16:01:20.447581] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:24:16.202 passed 00:24:16.202 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-22 16:01:20.447599] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:24:16.202 [2024-07-22 16:01:20.447620] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:24:16.202 passed 00:24:16.202 Test: test_reconnect_ctrlr ...[2024-07-22 16:01:20.448337] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.448440] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:16.202 [2024-07-22 16:01:20.448702] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.448825] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.448926] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 passed 00:24:16.202 Test: test_retry_failover_ctrlr ...[2024-07-22 16:01:20.449322] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 passed 00:24:16.202 Test: test_fail_path ...[2024-07-22 16:01:20.449870] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.450027] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.450140] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.450348] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.450476] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 passed 00:24:16.202 Test: test_nvme_ns_cmp ...passed 00:24:16.202 Test: test_ana_transition ...passed 00:24:16.202 Test: test_set_preferred_path ...passed 00:24:16.202 Test: test_find_next_io_path ...passed 00:24:16.202 Test: test_find_io_path_min_qd ...passed 00:24:16.202 Test: test_disable_auto_failback ...[2024-07-22 16:01:20.452141] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 passed 00:24:16.202 Test: test_set_multipath_policy ...passed 00:24:16.202 Test: test_uuid_generation ...passed 00:24:16.202 Test: test_retry_io_to_same_path ...passed 00:24:16.202 Test: test_race_between_reset_and_disconnected ...passed 00:24:16.202 Test: test_ctrlr_op_rpc ...passed 00:24:16.202 Test: test_bdev_ctrlr_op_rpc ...passed 00:24:16.202 Test: test_disable_enable_ctrlr ...[2024-07-22 16:01:20.455689] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:16.202 [2024-07-22 16:01:20.455858] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
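From here on, the remaining suites (bdev_nvme_ut above, then the raid, concat, raid1, zone, and gpt binaries below) deliberately exercise error paths, so lines such as "*ERROR*: Resetting controller failed." are expected output from the code under test; only a non-zero Failed column in a Run Summary would indicate a real problem. To rerun one of these binaries by hand with roughly the environment this job exported earlier, something like the sketch below can be used; the checkout path and suppression-file location are placeholders, and only the option strings and library directories are taken from this log.

  # Sketch: re-run a single unit-test binary with the sanitizer settings shown earlier in this log.
  spdk=/home/vagrant/spdk_repo/spdk                                    # adjust to your checkout
  export LD_LIBRARY_PATH=$spdk/build/lib:$spdk/dpdk/build/lib:$LD_LIBRARY_PATH
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  echo leak:libfuse3.so > /tmp/lsan_suppressions                       # same suppression the job writes
  export LSAN_OPTIONS=suppressions=/tmp/lsan_suppressions
  "$spdk"/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut             # any *_ut binary works here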
00:24:16.202 passed 00:24:16.202 Test: test_delete_ctrlr_done ...passed 00:24:16.202 Test: test_ns_remove_during_reset ...passed 00:24:16.202 00:24:16.202 Run Summary: Type Total Ran Passed Failed Inactive 00:24:16.202 suites 1 1 n/a 0 0 00:24:16.202 tests 48 48 48 0 0 00:24:16.202 asserts 3553 3553 3553 0 n/a 00:24:16.202 00:24:16.202 Elapsed time = 0.031 seconds 00:24:16.460 16:01:20 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:24:16.460 Test Options 00:24:16.460 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:24:16.460 00:24:16.460 00:24:16.460 CUnit - A unit testing framework for C - Version 2.1-3 00:24:16.460 http://cunit.sourceforge.net/ 00:24:16.460 00:24:16.460 00:24:16.460 Suite: raid 00:24:16.460 Test: test_create_raid ...passed 00:24:16.460 Test: test_create_raid_superblock ...passed 00:24:16.460 Test: test_delete_raid ...passed 00:24:16.460 Test: test_create_raid_invalid_args ...[2024-07-22 16:01:20.507587] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:24:16.460 [2024-07-22 16:01:20.507899] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:24:16.460 [2024-07-22 16:01:20.508405] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:24:16.460 [2024-07-22 16:01:20.508572] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:24:16.460 [2024-07-22 16:01:20.509202] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:24:16.460 passed 00:24:16.460 Test: test_delete_raid_invalid_args ...passed 00:24:16.460 Test: test_io_channel ...passed 00:24:16.460 Test: test_reset_io ...passed 00:24:16.460 Test: test_write_io ...passed 00:24:16.460 Test: test_read_io ...passed 00:24:17.025 Test: test_unmap_io ...passed 00:24:17.025 Test: test_io_failure ...[2024-07-22 16:01:21.250984] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:24:17.025 passed 00:24:17.025 Test: test_multi_raid_no_io ...passed 00:24:17.025 Test: test_multi_raid_with_io ...passed 00:24:17.025 Test: test_io_type_supported ...passed 00:24:17.025 Test: test_raid_json_dump_info ...passed 00:24:17.025 Test: test_context_size ...passed 00:24:17.025 Test: test_raid_level_conversions ...passed 00:24:17.025 Test: test_raid_process ...passed 00:24:17.025 Test: test_raid_io_split ...passed 00:24:17.025 00:24:17.025 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.025 suites 1 1 n/a 0 0 00:24:17.025 tests 19 19 19 0 0 00:24:17.025 asserts 177879 177879 177879 0 n/a 00:24:17.025 00:24:17.025 Elapsed time = 0.755 seconds 00:24:17.025 16:01:21 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:24:17.284 00:24:17.284 00:24:17.284 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.284 http://cunit.sourceforge.net/ 00:24:17.284 00:24:17.284 00:24:17.284 Suite: raid_sb 00:24:17.284 Test: test_raid_bdev_write_superblock ...passed 00:24:17.284 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:24:17.284 Test: 
test_raid_bdev_parse_superblock ...[2024-07-22 16:01:21.300943] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:24:17.284 passed 00:24:17.284 00:24:17.284 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.284 suites 1 1 n/a 0 0 00:24:17.284 tests 3 3 3 0 0 00:24:17.284 asserts 32 32 32 0 n/a 00:24:17.284 00:24:17.284 Elapsed time = 0.001 seconds 00:24:17.284 16:01:21 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:24:17.284 00:24:17.284 00:24:17.284 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.284 http://cunit.sourceforge.net/ 00:24:17.284 00:24:17.284 00:24:17.284 Suite: concat 00:24:17.284 Test: test_concat_start ...passed 00:24:17.284 Test: test_concat_rw ...passed 00:24:17.284 Test: test_concat_null_payload ...passed 00:24:17.284 00:24:17.284 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.284 suites 1 1 n/a 0 0 00:24:17.284 tests 3 3 3 0 0 00:24:17.284 asserts 8097 8097 8097 0 n/a 00:24:17.284 00:24:17.284 Elapsed time = 0.007 seconds 00:24:17.284 16:01:21 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:24:17.284 00:24:17.284 00:24:17.284 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.284 http://cunit.sourceforge.net/ 00:24:17.284 00:24:17.284 00:24:17.284 Suite: raid1 00:24:17.284 Test: test_raid1_start ...passed 00:24:17.284 Test: test_raid1_read_balancing ...passed 00:24:17.284 00:24:17.284 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.284 suites 1 1 n/a 0 0 00:24:17.284 tests 2 2 2 0 0 00:24:17.284 asserts 2856 2856 2856 0 n/a 00:24:17.284 00:24:17.284 Elapsed time = 0.007 seconds 00:24:17.284 16:01:21 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:24:17.284 00:24:17.284 00:24:17.284 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.284 http://cunit.sourceforge.net/ 00:24:17.284 00:24:17.284 00:24:17.284 Suite: zone 00:24:17.284 Test: test_zone_get_operation ...passed 00:24:17.284 Test: test_bdev_zone_get_info ...passed 00:24:17.284 Test: test_bdev_zone_management ...passed 00:24:17.284 Test: test_bdev_zone_append ...passed 00:24:17.284 Test: test_bdev_zone_append_with_md ...passed 00:24:17.284 Test: test_bdev_zone_appendv ...passed 00:24:17.284 Test: test_bdev_zone_appendv_with_md ...passed 00:24:17.284 Test: test_bdev_io_get_append_location ...passed 00:24:17.284 00:24:17.284 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.284 suites 1 1 n/a 0 0 00:24:17.284 tests 8 8 8 0 0 00:24:17.284 asserts 94 94 94 0 n/a 00:24:17.284 00:24:17.284 Elapsed time = 0.001 seconds 00:24:17.284 16:01:21 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:24:17.284 00:24:17.284 00:24:17.284 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.284 http://cunit.sourceforge.net/ 00:24:17.284 00:24:17.284 00:24:17.284 Suite: gpt_parse 00:24:17.284 Test: test_parse_mbr_and_primary ...[2024-07-22 16:01:21.460403] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:24:17.284 [2024-07-22 16:01:21.460629] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:24:17.284 [2024-07-22 16:01:21.460721] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:24:17.284 [2024-07-22 16:01:21.460750] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:24:17.284 [2024-07-22 16:01:21.460808] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:24:17.284 [2024-07-22 16:01:21.460833] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:24:17.284 passed 00:24:17.284 Test: test_parse_secondary ...[2024-07-22 16:01:21.461522] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:24:17.284 [2024-07-22 16:01:21.461547] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:24:17.284 [2024-07-22 16:01:21.461577] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:24:17.284 [2024-07-22 16:01:21.461603] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:24:17.284 passed 00:24:17.284 Test: test_check_mbr ...[2024-07-22 16:01:21.462254] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:24:17.284 passed 00:24:17.284 Test: test_read_header ...[2024-07-22 16:01:21.462289] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:24:17.284 [2024-07-22 16:01:21.462394] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:24:17.284 [2024-07-22 16:01:21.462426] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:24:17.284 passed 00:24:17.284 Test: test_read_partitions ...[2024-07-22 16:01:21.462465] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:24:17.284 [2024-07-22 16:01:21.462498] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:24:17.284 [2024-07-22 16:01:21.462540] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:24:17.284 [2024-07-22 16:01:21.462565] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:24:17.284 [2024-07-22 16:01:21.462647] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:24:17.284 [2024-07-22 16:01:21.462674] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:24:17.284 [2024-07-22 16:01:21.462710] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:24:17.284 [2024-07-22 16:01:21.462735] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:24:17.284 [2024-07-22 16:01:21.463037] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:24:17.284 passed 00:24:17.284 00:24:17.284 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.284 suites 1 1 n/a 0 0 00:24:17.284 tests 5 5 5 0 0 00:24:17.284 asserts 33 33 33 0 n/a 00:24:17.284 00:24:17.284 Elapsed time = 0.003 seconds 00:24:17.284 16:01:21 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:24:17.284 00:24:17.284 00:24:17.284 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.284 http://cunit.sourceforge.net/ 00:24:17.284 00:24:17.284 00:24:17.284 Suite: bdev_part 00:24:17.284 Test: part_test ...[2024-07-22 16:01:21.498290] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:24:17.284 passed 00:24:17.284 Test: part_free_test ...passed 00:24:17.284 Test: part_get_io_channel_test ...passed 00:24:17.284 Test: part_construct_ext ...passed 00:24:17.284 00:24:17.284 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.284 suites 1 1 n/a 0 0 00:24:17.284 tests 4 4 4 0 0 00:24:17.284 asserts 48 48 48 0 n/a 00:24:17.284 00:24:17.284 Elapsed time = 0.035 seconds 00:24:17.284 16:01:21 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:24:17.543 00:24:17.543 00:24:17.543 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.543 http://cunit.sourceforge.net/ 00:24:17.543 00:24:17.543 00:24:17.543 Suite: scsi_nvme_suite 00:24:17.543 Test: scsi_nvme_translate_test ...passed 00:24:17.543 00:24:17.543 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.543 suites 1 1 n/a 0 0 00:24:17.543 tests 1 1 1 0 0 00:24:17.543 asserts 104 104 104 0 n/a 00:24:17.543 00:24:17.543 Elapsed time = 0.000 seconds 00:24:17.543 16:01:21 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:24:17.543 00:24:17.543 00:24:17.543 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.543 http://cunit.sourceforge.net/ 00:24:17.543 00:24:17.543 00:24:17.543 Suite: lvol 00:24:17.543 Test: ut_lvs_init ...[2024-07-22 16:01:21.607178] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:24:17.543 [2024-07-22 16:01:21.607786] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:24:17.543 passed 00:24:17.543 Test: ut_lvol_init ...passed 00:24:17.543 Test: ut_lvol_snapshot ...passed 00:24:17.543 Test: ut_lvol_clone ...passed 00:24:17.543 Test: ut_lvs_destroy ...passed 00:24:17.543 Test: ut_lvs_unload ...passed 00:24:17.543 Test: ut_lvol_resize ...[2024-07-22 16:01:21.610482] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:24:17.543 passed 00:24:17.543 Test: ut_lvol_set_read_only ...passed 00:24:17.543 Test: ut_lvol_hotremove ...passed 00:24:17.543 Test: ut_vbdev_lvol_get_io_channel ...passed 00:24:17.543 Test: ut_vbdev_lvol_io_type_supported ...passed 00:24:17.543 Test: ut_lvol_read_write ...passed 00:24:17.543 Test: ut_vbdev_lvol_submit_request ...passed 00:24:17.543 Test: ut_lvol_examine_config ...passed 00:24:17.543 Test: ut_lvol_examine_disk ...[2024-07-22 16:01:21.611631] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:24:17.543 passed 00:24:17.543 Test: ut_lvol_rename ...[2024-07-22 16:01:21.613184] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:24:17.543 [2024-07-22 16:01:21.613279] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:24:17.543 passed 00:24:17.543 Test: ut_bdev_finish ...passed 00:24:17.543 Test: ut_lvs_rename ...passed 00:24:17.543 Test: ut_lvol_seek ...passed 00:24:17.543 Test: ut_esnap_dev_create ...[2024-07-22 16:01:21.614303] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:24:17.543 [2024-07-22 16:01:21.614361] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:24:17.543 [2024-07-22 16:01:21.614436] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:24:17.543 passed 00:24:17.543 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-22 16:01:21.614498] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:24:17.543 [2024-07-22 16:01:21.614736] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:24:17.543 [2024-07-22 16:01:21.614813] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:24:17.543 passed 00:24:17.543 00:24:17.543 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.543 suites 1 1 n/a 0 0 00:24:17.543 tests 21 21 21 0 0 00:24:17.543 asserts 712 712 712 0 n/a 00:24:17.543 00:24:17.543 Elapsed time = 0.008 seconds 00:24:17.543 16:01:21 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:24:17.543 00:24:17.543 00:24:17.543 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.543 http://cunit.sourceforge.net/ 00:24:17.543 00:24:17.543 00:24:17.543 Suite: zone_block 00:24:17.543 Test: test_zone_block_create ...passed 00:24:17.543 Test: test_zone_block_create_invalid ...[2024-07-22 16:01:21.685984] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:24:17.543 [2024-07-22 16:01:21.686364] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-22 16:01:21.686503] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:24:17.543 [2024-07-22 16:01:21.686557] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-22 16:01:21.686718] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:24:17.543 [2024-07-22 16:01:21.686764] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-22 16:01:21.686864] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:24:17.543 [2024-07-22 16:01:21.686904] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:24:17.543 Test: test_get_zone_info ...[2024-07-22 16:01:21.687917] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.543 [2024-07-22 16:01:21.687982] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.543 [2024-07-22 16:01:21.688068] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.543 passed 00:24:17.543 Test: test_supported_io_types ...passed 00:24:17.543 Test: test_reset_zone ...[2024-07-22 16:01:21.689006] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.543 [2024-07-22 16:01:21.689089] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.543 passed 00:24:17.543 Test: test_open_zone ...[2024-07-22 16:01:21.689521] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.690293] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.690356] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 passed 00:24:17.544 Test: test_zone_write ...[2024-07-22 16:01:21.690800] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:24:17.544 [2024-07-22 16:01:21.690841] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.690892] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:24:17.544 [2024-07-22 16:01:21.690914] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.697494] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:24:17.544 [2024-07-22 16:01:21.697558] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:24:17.544 [2024-07-22 16:01:21.697661] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:24:17.544 [2024-07-22 16:01:21.697689] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.704226] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:24:17.544 [2024-07-22 16:01:21.704310] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 passed 00:24:17.544 Test: test_zone_read ...[2024-07-22 16:01:21.704745] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:24:17.544 [2024-07-22 16:01:21.704783] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.704854] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:24:17.544 [2024-07-22 16:01:21.704875] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.705364] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:24:17.544 [2024-07-22 16:01:21.705420] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 passed 00:24:17.544 Test: test_close_zone ...[2024-07-22 16:01:21.705741] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.705831] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.706069] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 passed 00:24:17.544 Test: test_finish_zone ...[2024-07-22 16:01:21.706122] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.706697] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.706796] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:24:17.544 passed 00:24:17.544 Test: test_append_zone ...[2024-07-22 16:01:21.707172] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:24:17.544 [2024-07-22 16:01:21.707209] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.707251] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:24:17.544 [2024-07-22 16:01:21.707273] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 [2024-07-22 16:01:21.720116] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:24:17.544 [2024-07-22 16:01:21.720185] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:24:17.544 passed 00:24:17.544 00:24:17.544 Run Summary: Type Total Ran Passed Failed Inactive 00:24:17.544 suites 1 1 n/a 0 0 00:24:17.544 tests 11 11 11 0 0 00:24:17.544 asserts 3437 3437 3437 0 n/a 00:24:17.544 00:24:17.544 Elapsed time = 0.036 seconds 00:24:17.544 16:01:21 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:24:17.544 00:24:17.544 00:24:17.544 CUnit - A unit testing framework for C - Version 2.1-3 00:24:17.544 http://cunit.sourceforge.net/ 00:24:17.544 00:24:17.544 00:24:17.544 Suite: bdev 00:24:17.802 Test: basic ...[2024-07-22 16:01:21.831881] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x57f62e10fec1): Operation not permitted (rc=-1) 00:24:17.802 [2024-07-22 16:01:21.832495] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x57f62e10fe80): Operation not permitted (rc=-1) 00:24:17.802 [2024-07-22 16:01:21.832638] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x57f62e10fec1): Operation not permitted (rc=-1) 00:24:17.802 passed 00:24:17.802 Test: unregister_and_close ...passed 00:24:17.802 Test: unregister_and_close_different_threads ...passed 00:24:17.802 Test: basic_qos ...passed 00:24:17.802 Test: put_channel_during_reset ...passed 00:24:17.802 Test: aborted_reset ...passed 00:24:17.802 Test: aborted_reset_no_outstanding_io ...passed 00:24:18.061 Test: io_during_reset ...passed 00:24:18.061 Test: reset_completions ...passed 00:24:18.061 Test: io_during_qos_queue ...passed 00:24:18.061 Test: io_during_qos_reset ...passed 00:24:18.061 Test: enomem ...passed 00:24:18.061 Test: enomem_multi_bdev ...passed 00:24:18.061 Test: enomem_multi_bdev_unregister ...passed 00:24:18.061 Test: enomem_multi_io_target ...passed 00:24:18.355 Test: qos_dynamic_enable ...passed 00:24:18.355 Test: bdev_histograms_mt ...passed 00:24:18.355 Test: bdev_set_io_timeout_mt ...[2024-07-22 16:01:22.433285] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:24:18.355 passed 00:24:18.355 Test: lock_lba_range_then_submit_io ...[2024-07-22 16:01:22.440755] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x57f62e10fe40 already registered (old:0x5130000003c0 new:0x513000000c80) 00:24:18.355 
passed 00:24:18.355 Test: unregister_during_reset ...passed 00:24:18.355 Test: event_notify_and_close ...passed 00:24:18.355 Test: unregister_and_qos_poller ...passed 00:24:18.355 Suite: bdev_wrong_thread 00:24:18.355 Test: spdk_bdev_register_wt ...[2024-07-22 16:01:22.563409] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x518000001480 (0x518000001480) 00:24:18.355 passed 00:24:18.355 Test: spdk_bdev_examine_wt ...[2024-07-22 16:01:22.563702] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x518000001480 (0x518000001480) 00:24:18.355 passed 00:24:18.355 00:24:18.355 Run Summary: Type Total Ran Passed Failed Inactive 00:24:18.355 suites 2 2 n/a 0 0 00:24:18.355 tests 24 24 24 0 0 00:24:18.355 asserts 621 621 621 0 n/a 00:24:18.355 00:24:18.355 Elapsed time = 0.752 seconds 00:24:18.355 00:24:18.355 real 0m3.412s 00:24:18.355 user 0m1.330s 00:24:18.355 sys 0m2.084s 00:24:18.355 16:01:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.355 16:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:18.355 ************************************ 00:24:18.355 END TEST unittest_bdev 00:24:18.355 ************************************ 00:24:18.613 16:01:22 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:24:18.613 16:01:22 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:24:18.613 16:01:22 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:24:18.613 16:01:22 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:24:18.613 16:01:22 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:24:18.613 16:01:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:18.613 16:01:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:18.613 16:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 ************************************ 00:24:18.613 START TEST unittest_bdev_raid5f 00:24:18.613 ************************************ 00:24:18.613 16:01:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:24:18.613 00:24:18.613 00:24:18.613 CUnit - A unit testing framework for C - Version 2.1-3 00:24:18.613 http://cunit.sourceforge.net/ 00:24:18.613 00:24:18.613 00:24:18.613 Suite: raid5f 00:24:18.613 Test: test_raid5f_start ...passed 00:24:19.179 Test: test_raid5f_submit_read_request ...passed 00:24:19.437 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:24:23.623 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:24:41.735 Test: test_raid5f_chunk_write_error ...passed 00:24:49.867 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:24:53.149 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:25:25.240 Test: test_raid5f_submit_read_request_degraded ...passed 00:25:25.240 00:25:25.240 Run Summary: Type Total Ran Passed Failed Inactive 00:25:25.240 suites 1 1 n/a 0 0 00:25:25.240 tests 8 8 8 0 0 00:25:25.240 asserts 351864 351864 351864 0 n/a 00:25:25.240 00:25:25.240 Elapsed time = 63.003 seconds 00:25:25.240 00:25:25.240 real 1m3.106s 00:25:25.240 user 
0m59.341s 00:25:25.240 sys 0m3.743s 00:25:25.240 16:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.240 16:02:25 -- common/autotest_common.sh@10 -- # set +x 00:25:25.240 ************************************ 00:25:25.240 END TEST unittest_bdev_raid5f 00:25:25.240 ************************************ 00:25:25.240 16:02:25 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:25:25.240 16:02:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:25.240 16:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:25.240 16:02:25 -- common/autotest_common.sh@10 -- # set +x 00:25:25.240 ************************************ 00:25:25.240 START TEST unittest_blob_blobfs 00:25:25.240 ************************************ 00:25:25.240 16:02:25 -- common/autotest_common.sh@1104 -- # unittest_blob 00:25:25.240 16:02:25 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:25:25.240 16:02:25 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:25:25.240 00:25:25.240 00:25:25.240 CUnit - A unit testing framework for C - Version 2.1-3 00:25:25.240 http://cunit.sourceforge.net/ 00:25:25.240 00:25:25.240 00:25:25.240 Suite: blob_nocopy_noextent 00:25:25.240 Test: blob_init ...[2024-07-22 16:02:25.835430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:25:25.240 passed 00:25:25.240 Test: blob_thin_provision ...passed 00:25:25.240 Test: blob_read_only ...passed 00:25:25.240 Test: bs_load ...[2024-07-22 16:02:25.937117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:25:25.240 passed 00:25:25.240 Test: bs_load_custom_cluster_size ...passed 00:25:25.240 Test: bs_load_after_failed_grow ...passed 00:25:25.240 Test: bs_cluster_sz ...[2024-07-22 16:02:25.971256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:25:25.240 [2024-07-22 16:02:25.971676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:25:25.240 [2024-07-22 16:02:25.971766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:25:25.240 passed 00:25:25.240 Test: bs_resize_md ...passed 00:25:25.240 Test: bs_destroy ...passed 00:25:25.240 Test: bs_type ...passed 00:25:25.240 Test: bs_super_block ...passed 00:25:25.240 Test: bs_test_recover_cluster_count ...passed 00:25:25.240 Test: bs_grow_live ...passed 00:25:25.240 Test: bs_grow_live_no_space ...passed 00:25:25.240 Test: bs_test_grow ...passed 00:25:25.240 Test: blob_serialize_test ...passed 00:25:25.240 Test: super_block_crc ...passed 00:25:25.240 Test: blob_thin_prov_write_count_io ...passed 00:25:25.240 Test: bs_load_iter_test ...passed 00:25:25.240 Test: blob_relations ...[2024-07-22 16:02:26.167716] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:25.240 [2024-07-22 16:02:26.167871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.240 [2024-07-22 16:02:26.168918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:25.240 [2024-07-22 16:02:26.169015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.240 passed 00:25:25.240 Test: blob_relations2 ...[2024-07-22 16:02:26.186558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:25.240 [2024-07-22 16:02:26.186644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.240 [2024-07-22 16:02:26.186681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:25.240 [2024-07-22 16:02:26.186698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.240 [2024-07-22 16:02:26.188365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:25.240 [2024-07-22 16:02:26.188422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.240 [2024-07-22 16:02:26.188886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:25.240 [2024-07-22 16:02:26.188933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.240 passed 00:25:25.240 Test: blob_relations3 ...passed 00:25:25.240 Test: blobstore_clean_power_failure ...passed 00:25:25.240 Test: blob_delete_snapshot_power_failure ...[2024-07-22 16:02:26.403293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:25:25.240 [2024-07-22 16:02:26.418381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:25.240 [2024-07-22 16:02:26.418463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:25.241 [2024-07-22 16:02:26.418495] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 [2024-07-22 16:02:26.433379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:25:25.241 [2024-07-22 16:02:26.433448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:25:25.241 [2024-07-22 16:02:26.433478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:25.241 [2024-07-22 16:02:26.433506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 [2024-07-22 16:02:26.448593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:25:25.241 [2024-07-22 16:02:26.448727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 [2024-07-22 16:02:26.463901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:25:25.241 [2024-07-22 16:02:26.464069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 [2024-07-22 16:02:26.479020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:25:25.241 [2024-07-22 16:02:26.479134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 passed 00:25:25.241 Test: blob_create_snapshot_power_failure ...[2024-07-22 16:02:26.524878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:25.241 [2024-07-22 16:02:26.553943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:25:25.241 [2024-07-22 16:02:26.569087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:25:25.241 passed 00:25:25.241 Test: blob_io_unit ...passed 00:25:25.241 Test: blob_io_unit_compatibility ...passed 00:25:25.241 Test: blob_ext_md_pages ...passed 00:25:25.241 Test: blob_esnap_io_4096_4096 ...passed 00:25:25.241 Test: blob_esnap_io_512_512 ...passed 00:25:25.241 Test: blob_esnap_io_4096_512 ...passed 00:25:25.241 Test: blob_esnap_io_512_4096 ...passed 00:25:25.241 Suite: blob_bs_nocopy_noextent 00:25:25.241 Test: blob_open ...passed 00:25:25.241 Test: blob_create ...[2024-07-22 16:02:26.863351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:25:25.241 passed 00:25:25.241 Test: blob_create_loop ...passed 00:25:25.241 Test: blob_create_fail ...[2024-07-22 16:02:26.980481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:25.241 passed 00:25:25.241 Test: blob_create_internal ...passed 00:25:25.241 Test: blob_create_zero_extent ...passed 00:25:25.241 Test: blob_snapshot ...passed 00:25:25.241 Test: blob_clone ...passed 00:25:25.241 Test: blob_inflate ...[2024-07-22 16:02:27.199942] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:25:25.241 passed 00:25:25.241 Test: blob_delete ...passed 00:25:25.241 Test: blob_resize_test ...[2024-07-22 16:02:27.278084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:25:25.241 passed 00:25:25.241 Test: channel_ops ...passed 00:25:25.241 Test: blob_super ...passed 00:25:25.241 Test: blob_rw_verify_iov ...passed 00:25:25.241 Test: blob_unmap ...passed 00:25:25.241 Test: blob_iter ...passed 00:25:25.241 Test: blob_parse_md ...passed 00:25:25.241 Test: bs_load_pending_removal ...passed 00:25:25.241 Test: bs_unload ...[2024-07-22 16:02:27.598197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:25:25.241 passed 00:25:25.241 Test: bs_usable_clusters ...passed 00:25:25.241 Test: blob_crc ...[2024-07-22 16:02:27.678293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:25.241 [2024-07-22 16:02:27.678488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:25.241 passed 00:25:25.241 Test: blob_flags ...passed 00:25:25.241 Test: bs_version ...passed 00:25:25.241 Test: blob_set_xattrs_test ...[2024-07-22 16:02:27.801257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:25.241 [2024-07-22 16:02:27.801367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:25.241 passed 00:25:25.241 Test: blob_thin_prov_alloc ...passed 00:25:25.241 Test: blob_insert_cluster_msg_test ...passed 00:25:25.241 Test: blob_thin_prov_rw ...passed 00:25:25.241 Test: blob_thin_prov_rle ...passed 00:25:25.241 Test: blob_thin_prov_rw_iov ...passed 00:25:25.241 Test: blob_snapshot_rw ...passed 00:25:25.241 Test: blob_snapshot_rw_iov ...passed 00:25:25.241 Test: blob_inflate_rw ...passed 00:25:25.241 Test: blob_snapshot_freeze_io ...passed 00:25:25.241 Test: blob_operation_split_rw ...passed 00:25:25.241 Test: blob_operation_split_rw_iov ...passed 00:25:25.241 Test: blob_simultaneous_operations ...[2024-07-22 16:02:28.891575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:25.241 [2024-07-22 16:02:28.891726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 [2024-07-22 16:02:28.893254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:25.241 [2024-07-22 16:02:28.893310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 [2024-07-22 16:02:28.907204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:25.241 [2024-07-22 16:02:28.907289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 [2024-07-22 16:02:28.907430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:25:25.241 [2024-07-22 16:02:28.907469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:25.241 passed 00:25:25.241 Test: blob_persist_test ...passed 00:25:25.241 Test: blob_decouple_snapshot ...passed 00:25:25.241 Test: blob_seek_io_unit ...passed 00:25:25.241 Test: blob_nested_freezes ...passed 00:25:25.241 Suite: blob_blob_nocopy_noextent 00:25:25.241 Test: blob_write ...passed 00:25:25.241 Test: blob_read ...passed 00:25:25.241 Test: blob_rw_verify ...passed 00:25:25.241 Test: blob_rw_verify_iov_nomem ...passed 00:25:25.241 Test: blob_rw_iov_read_only ...passed 00:25:25.241 Test: blob_xattr ...passed 00:25:25.241 Test: blob_dirty_shutdown ...passed 00:25:25.241 Test: blob_is_degraded ...passed 00:25:25.241 Suite: blob_esnap_bs_nocopy_noextent 00:25:25.500 Test: blob_esnap_create ...passed 00:25:25.500 Test: blob_esnap_thread_add_remove ...passed 00:25:25.500 Test: blob_esnap_clone_snapshot ...passed 00:25:25.500 Test: blob_esnap_clone_inflate ...passed 00:25:25.500 Test: blob_esnap_clone_decouple ...passed 00:25:25.500 Test: blob_esnap_clone_reload ...passed 00:25:25.500 Test: blob_esnap_hotplug ...passed 00:25:25.500 Suite: blob_nocopy_extent 00:25:25.500 Test: blob_init ...[2024-07-22 16:02:29.764496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:25:25.758 passed 00:25:25.758 Test: blob_thin_provision ...passed 00:25:25.758 Test: blob_read_only ...passed 00:25:25.758 Test: bs_load ...[2024-07-22 16:02:29.821177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:25:25.759 passed 00:25:25.759 Test: bs_load_custom_cluster_size ...passed 00:25:25.759 Test: bs_load_after_failed_grow ...passed 00:25:25.759 Test: bs_cluster_sz ...[2024-07-22 16:02:29.852638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:25:25.759 [2024-07-22 16:02:29.853008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:25:25.759 [2024-07-22 16:02:29.853085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:25:25.759 passed 00:25:25.759 Test: bs_resize_md ...passed 00:25:25.759 Test: bs_destroy ...passed 00:25:25.759 Test: bs_type ...passed 00:25:25.759 Test: bs_super_block ...passed 00:25:25.759 Test: bs_test_recover_cluster_count ...passed 00:25:25.759 Test: bs_grow_live ...passed 00:25:25.759 Test: bs_grow_live_no_space ...passed 00:25:25.759 Test: bs_test_grow ...passed 00:25:25.759 Test: blob_serialize_test ...passed 00:25:25.759 Test: super_block_crc ...passed 00:25:25.759 Test: blob_thin_prov_write_count_io ...passed 00:25:25.759 Test: bs_load_iter_test ...passed 00:25:26.017 Test: blob_relations ...[2024-07-22 16:02:30.036364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:26.017 [2024-07-22 16:02:30.036467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.017 [2024-07-22 16:02:30.037603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:26.017 [2024-07-22 16:02:30.037660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.017 passed 00:25:26.017 Test: blob_relations2 ...[2024-07-22 16:02:30.054503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:26.017 [2024-07-22 16:02:30.054596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.017 [2024-07-22 16:02:30.054629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:26.017 [2024-07-22 16:02:30.054646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.017 [2024-07-22 16:02:30.056386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:26.017 [2024-07-22 16:02:30.056457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.017 [2024-07-22 16:02:30.056934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:26.017 [2024-07-22 16:02:30.056983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.017 passed 00:25:26.017 Test: blob_relations3 ...passed 00:25:26.017 Test: blobstore_clean_power_failure ...passed 00:25:26.017 Test: blob_delete_snapshot_power_failure ...[2024-07-22 16:02:30.249604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:25:26.017 [2024-07-22 16:02:30.264546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:25:26.017 [2024-07-22 16:02:30.279453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:26.017 [2024-07-22 16:02:30.279568] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:26.017 [2024-07-22 16:02:30.279629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.275 [2024-07-22 16:02:30.294382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:25:26.275 [2024-07-22 16:02:30.294484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:25:26.275 [2024-07-22 16:02:30.294514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:26.276 [2024-07-22 16:02:30.294539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.276 [2024-07-22 16:02:30.309033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:25:26.276 [2024-07-22 16:02:30.309149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:25:26.276 [2024-07-22 16:02:30.309176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:26.276 [2024-07-22 16:02:30.309204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.276 [2024-07-22 16:02:30.323572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:25:26.276 [2024-07-22 16:02:30.323695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.276 [2024-07-22 16:02:30.337924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:25:26.276 [2024-07-22 16:02:30.338099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.276 [2024-07-22 16:02:30.352429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:25:26.276 [2024-07-22 16:02:30.352543] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:26.276 passed 00:25:26.276 Test: blob_create_snapshot_power_failure ...[2024-07-22 16:02:30.395279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:26.276 [2024-07-22 16:02:30.410022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:25:26.276 [2024-07-22 16:02:30.439195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:25:26.276 [2024-07-22 16:02:30.454321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:25:26.276 passed 00:25:26.276 Test: blob_io_unit ...passed 00:25:26.276 Test: blob_io_unit_compatibility ...passed 00:25:26.534 Test: blob_ext_md_pages ...passed 00:25:26.534 Test: blob_esnap_io_4096_4096 ...passed 00:25:26.534 Test: blob_esnap_io_512_512 ...passed 00:25:26.534 Test: blob_esnap_io_4096_512 ...passed 00:25:26.534 Test: 
blob_esnap_io_512_4096 ...passed 00:25:26.534 Suite: blob_bs_nocopy_extent 00:25:26.534 Test: blob_open ...passed 00:25:26.534 Test: blob_create ...[2024-07-22 16:02:30.745381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:25:26.534 passed 00:25:26.793 Test: blob_create_loop ...passed 00:25:26.793 Test: blob_create_fail ...[2024-07-22 16:02:30.870683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:26.793 passed 00:25:26.793 Test: blob_create_internal ...passed 00:25:26.793 Test: blob_create_zero_extent ...passed 00:25:26.793 Test: blob_snapshot ...passed 00:25:26.793 Test: blob_clone ...passed 00:25:27.051 Test: blob_inflate ...[2024-07-22 16:02:31.077264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:25:27.051 passed 00:25:27.051 Test: blob_delete ...passed 00:25:27.051 Test: blob_resize_test ...[2024-07-22 16:02:31.151968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:25:27.051 passed 00:25:27.051 Test: channel_ops ...passed 00:25:27.051 Test: blob_super ...passed 00:25:27.051 Test: blob_rw_verify_iov ...passed 00:25:27.310 Test: blob_unmap ...passed 00:25:27.310 Test: blob_iter ...passed 00:25:27.310 Test: blob_parse_md ...passed 00:25:27.310 Test: bs_load_pending_removal ...passed 00:25:27.310 Test: bs_unload ...[2024-07-22 16:02:31.470548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:25:27.310 passed 00:25:27.310 Test: bs_usable_clusters ...passed 00:25:27.310 Test: blob_crc ...[2024-07-22 16:02:31.551631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:27.310 [2024-07-22 16:02:31.551779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:27.310 passed 00:25:27.568 Test: blob_flags ...passed 00:25:27.568 Test: bs_version ...passed 00:25:27.568 Test: blob_set_xattrs_test ...[2024-07-22 16:02:31.671460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:27.568 [2024-07-22 16:02:31.671586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:27.568 passed 00:25:27.568 Test: blob_thin_prov_alloc ...passed 00:25:27.826 Test: blob_insert_cluster_msg_test ...passed 00:25:27.826 Test: blob_thin_prov_rw ...passed 00:25:27.826 Test: blob_thin_prov_rle ...passed 00:25:27.826 Test: blob_thin_prov_rw_iov ...passed 00:25:27.826 Test: blob_snapshot_rw ...passed 00:25:27.826 Test: blob_snapshot_rw_iov ...passed 00:25:28.085 Test: blob_inflate_rw ...passed 00:25:28.085 Test: blob_snapshot_freeze_io ...passed 00:25:28.343 Test: blob_operation_split_rw ...passed 00:25:28.601 Test: blob_operation_split_rw_iov ...passed 00:25:28.601 Test: blob_simultaneous_operations ...[2024-07-22 16:02:32.684911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:28.601 [2024-07-22 
16:02:32.685057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:28.601 [2024-07-22 16:02:32.686428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:28.601 [2024-07-22 16:02:32.686487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:28.601 [2024-07-22 16:02:32.699335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:28.601 [2024-07-22 16:02:32.699397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:28.601 [2024-07-22 16:02:32.699556] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:28.601 [2024-07-22 16:02:32.699577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:28.601 passed 00:25:28.601 Test: blob_persist_test ...passed 00:25:28.601 Test: blob_decouple_snapshot ...passed 00:25:28.859 Test: blob_seek_io_unit ...passed 00:25:28.859 Test: blob_nested_freezes ...passed 00:25:28.859 Suite: blob_blob_nocopy_extent 00:25:28.859 Test: blob_write ...passed 00:25:28.859 Test: blob_read ...passed 00:25:28.859 Test: blob_rw_verify ...passed 00:25:28.859 Test: blob_rw_verify_iov_nomem ...passed 00:25:28.859 Test: blob_rw_iov_read_only ...passed 00:25:29.117 Test: blob_xattr ...passed 00:25:29.117 Test: blob_dirty_shutdown ...passed 00:25:29.117 Test: blob_is_degraded ...passed 00:25:29.117 Suite: blob_esnap_bs_nocopy_extent 00:25:29.117 Test: blob_esnap_create ...passed 00:25:29.117 Test: blob_esnap_thread_add_remove ...passed 00:25:29.117 Test: blob_esnap_clone_snapshot ...passed 00:25:29.382 Test: blob_esnap_clone_inflate ...passed 00:25:29.382 Test: blob_esnap_clone_decouple ...passed 00:25:29.382 Test: blob_esnap_clone_reload ...passed 00:25:29.382 Test: blob_esnap_hotplug ...passed 00:25:29.382 Suite: blob_copy_noextent 00:25:29.382 Test: blob_init ...[2024-07-22 16:02:33.506507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:25:29.382 passed 00:25:29.382 Test: blob_thin_provision ...passed 00:25:29.382 Test: blob_read_only ...passed 00:25:29.382 Test: bs_load ...[2024-07-22 16:02:33.558055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:25:29.382 passed 00:25:29.382 Test: bs_load_custom_cluster_size ...passed 00:25:29.382 Test: bs_load_after_failed_grow ...passed 00:25:29.382 Test: bs_cluster_sz ...[2024-07-22 16:02:33.586027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:25:29.382 [2024-07-22 16:02:33.586259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:25:29.382 [2024-07-22 16:02:33.586324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:25:29.382 passed 00:25:29.382 Test: bs_resize_md ...passed 00:25:29.382 Test: bs_destroy ...passed 00:25:29.640 Test: bs_type ...passed 00:25:29.640 Test: bs_super_block ...passed 00:25:29.640 Test: bs_test_recover_cluster_count ...passed 00:25:29.640 Test: bs_grow_live ...passed 00:25:29.640 Test: bs_grow_live_no_space ...passed 00:25:29.640 Test: bs_test_grow ...passed 00:25:29.640 Test: blob_serialize_test ...passed 00:25:29.640 Test: super_block_crc ...passed 00:25:29.640 Test: blob_thin_prov_write_count_io ...passed 00:25:29.640 Test: bs_load_iter_test ...passed 00:25:29.640 Test: blob_relations ...[2024-07-22 16:02:33.764582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:29.640 [2024-07-22 16:02:33.764725] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.640 [2024-07-22 16:02:33.765514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:29.640 [2024-07-22 16:02:33.765570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.640 passed 00:25:29.640 Test: blob_relations2 ...[2024-07-22 16:02:33.781113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:29.640 [2024-07-22 16:02:33.781212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.640 [2024-07-22 16:02:33.781238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:29.640 [2024-07-22 16:02:33.781252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.640 [2024-07-22 16:02:33.782354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:29.640 [2024-07-22 16:02:33.782416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.640 [2024-07-22 16:02:33.782753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:29.640 [2024-07-22 16:02:33.782804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.640 passed 00:25:29.640 Test: blob_relations3 ...passed 00:25:29.899 Test: blobstore_clean_power_failure ...passed 00:25:29.899 Test: blob_delete_snapshot_power_failure ...[2024-07-22 16:02:33.961687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:25:29.899 [2024-07-22 16:02:33.975248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:29.899 [2024-07-22 16:02:33.975351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:29.899 [2024-07-22 16:02:33.975374] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.899 [2024-07-22 16:02:33.988496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:25:29.899 [2024-07-22 16:02:33.988617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:25:29.899 [2024-07-22 16:02:33.988636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:29.899 [2024-07-22 16:02:33.988656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.899 [2024-07-22 16:02:34.002802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:25:29.899 [2024-07-22 16:02:34.002914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.899 [2024-07-22 16:02:34.016831] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:25:29.899 [2024-07-22 16:02:34.016949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.899 [2024-07-22 16:02:34.031096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:25:29.899 [2024-07-22 16:02:34.031186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:29.899 passed 00:25:29.899 Test: blob_create_snapshot_power_failure ...[2024-07-22 16:02:34.071759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:29.899 [2024-07-22 16:02:34.098488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:25:29.899 [2024-07-22 16:02:34.112109] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:25:29.899 passed 00:25:30.158 Test: blob_io_unit ...passed 00:25:30.158 Test: blob_io_unit_compatibility ...passed 00:25:30.158 Test: blob_ext_md_pages ...passed 00:25:30.158 Test: blob_esnap_io_4096_4096 ...passed 00:25:30.158 Test: blob_esnap_io_512_512 ...passed 00:25:30.158 Test: blob_esnap_io_4096_512 ...passed 00:25:30.158 Test: blob_esnap_io_512_4096 ...passed 00:25:30.158 Suite: blob_bs_copy_noextent 00:25:30.158 Test: blob_open ...passed 00:25:30.158 Test: blob_create ...[2024-07-22 16:02:34.402205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:25:30.158 passed 00:25:30.416 Test: blob_create_loop ...passed 00:25:30.416 Test: blob_create_fail ...[2024-07-22 16:02:34.514065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:30.416 passed 00:25:30.416 Test: blob_create_internal ...passed 00:25:30.416 Test: blob_create_zero_extent ...passed 00:25:30.416 Test: blob_snapshot ...passed 00:25:30.416 Test: blob_clone ...passed 00:25:30.675 Test: blob_inflate ...[2024-07-22 16:02:34.716788] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:25:30.675 passed 00:25:30.675 Test: blob_delete ...passed 00:25:30.675 Test: blob_resize_test ...[2024-07-22 16:02:34.793595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:25:30.675 passed 00:25:30.675 Test: channel_ops ...passed 00:25:30.675 Test: blob_super ...passed 00:25:30.675 Test: blob_rw_verify_iov ...passed 00:25:30.933 Test: blob_unmap ...passed 00:25:30.933 Test: blob_iter ...passed 00:25:30.933 Test: blob_parse_md ...passed 00:25:30.933 Test: bs_load_pending_removal ...passed 00:25:30.933 Test: bs_unload ...[2024-07-22 16:02:35.107906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:25:30.933 passed 00:25:30.933 Test: bs_usable_clusters ...passed 00:25:30.933 Test: blob_crc ...[2024-07-22 16:02:35.186406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:30.933 [2024-07-22 16:02:35.186522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:30.933 passed 00:25:31.191 Test: blob_flags ...passed 00:25:31.191 Test: bs_version ...passed 00:25:31.191 Test: blob_set_xattrs_test ...[2024-07-22 16:02:35.309346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:31.191 [2024-07-22 16:02:35.309456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:31.191 passed 00:25:31.450 Test: blob_thin_prov_alloc ...passed 00:25:31.450 Test: blob_insert_cluster_msg_test ...passed 00:25:31.450 Test: blob_thin_prov_rw ...passed 00:25:31.450 Test: blob_thin_prov_rle ...passed 00:25:31.450 Test: blob_thin_prov_rw_iov ...passed 00:25:31.450 Test: blob_snapshot_rw ...passed 00:25:31.450 Test: blob_snapshot_rw_iov ...passed 00:25:31.709 Test: blob_inflate_rw ...passed 00:25:31.968 Test: blob_snapshot_freeze_io ...passed 00:25:31.968 Test: blob_operation_split_rw ...passed 00:25:32.227 Test: blob_operation_split_rw_iov ...passed 00:25:32.227 Test: blob_simultaneous_operations ...[2024-07-22 16:02:36.354189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:32.227 [2024-07-22 16:02:36.354300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:32.227 [2024-07-22 16:02:36.354832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:32.227 [2024-07-22 16:02:36.354865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:32.227 [2024-07-22 16:02:36.357989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:32.227 [2024-07-22 16:02:36.358042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:32.227 [2024-07-22 16:02:36.358155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:25:32.227 [2024-07-22 16:02:36.358177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:32.227 passed 00:25:32.227 Test: blob_persist_test ...passed 00:25:32.227 Test: blob_decouple_snapshot ...passed 00:25:32.485 Test: blob_seek_io_unit ...passed 00:25:32.485 Test: blob_nested_freezes ...passed 00:25:32.485 Suite: blob_blob_copy_noextent 00:25:32.485 Test: blob_write ...passed 00:25:32.485 Test: blob_read ...passed 00:25:32.485 Test: blob_rw_verify ...passed 00:25:32.485 Test: blob_rw_verify_iov_nomem ...passed 00:25:32.485 Test: blob_rw_iov_read_only ...passed 00:25:32.816 Test: blob_xattr ...passed 00:25:32.816 Test: blob_dirty_shutdown ...passed 00:25:32.816 Test: blob_is_degraded ...passed 00:25:32.816 Suite: blob_esnap_bs_copy_noextent 00:25:32.816 Test: blob_esnap_create ...passed 00:25:32.816 Test: blob_esnap_thread_add_remove ...passed 00:25:33.075 Test: blob_esnap_clone_snapshot ...passed 00:25:33.075 Test: blob_esnap_clone_inflate ...passed 00:25:33.075 Test: blob_esnap_clone_decouple ...passed 00:25:33.075 Test: blob_esnap_clone_reload ...passed 00:25:33.075 Test: blob_esnap_hotplug ...passed 00:25:33.075 Suite: blob_copy_extent 00:25:33.075 Test: blob_init ...[2024-07-22 16:02:37.340269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:25:33.364 passed 00:25:33.364 Test: blob_thin_provision ...passed 00:25:33.364 Test: blob_read_only ...passed 00:25:33.364 Test: bs_load ...[2024-07-22 16:02:37.416242] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:25:33.364 passed 00:25:33.364 Test: bs_load_custom_cluster_size ...passed 00:25:33.364 Test: bs_load_after_failed_grow ...passed 00:25:33.364 Test: bs_cluster_sz ...[2024-07-22 16:02:37.457119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:25:33.364 [2024-07-22 16:02:37.457356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:25:33.364 [2024-07-22 16:02:37.457403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:25:33.364 passed 00:25:33.364 Test: bs_resize_md ...passed 00:25:33.364 Test: bs_destroy ...passed 00:25:33.364 Test: bs_type ...passed 00:25:33.364 Test: bs_super_block ...passed 00:25:33.364 Test: bs_test_recover_cluster_count ...passed 00:25:33.364 Test: bs_grow_live ...passed 00:25:33.364 Test: bs_grow_live_no_space ...passed 00:25:33.364 Test: bs_test_grow ...passed 00:25:33.623 Test: blob_serialize_test ...passed 00:25:33.623 Test: super_block_crc ...passed 00:25:33.623 Test: blob_thin_prov_write_count_io ...passed 00:25:33.623 Test: bs_load_iter_test ...passed 00:25:33.623 Test: blob_relations ...[2024-07-22 16:02:37.711516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:33.623 [2024-07-22 16:02:37.711635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.623 [2024-07-22 16:02:37.712803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:33.623 [2024-07-22 16:02:37.712850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.623 passed 00:25:33.623 Test: blob_relations2 ...[2024-07-22 16:02:37.735577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:33.623 [2024-07-22 16:02:37.735674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.623 [2024-07-22 16:02:37.735706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:33.623 [2024-07-22 16:02:37.735721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.623 [2024-07-22 16:02:37.737538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:33.623 [2024-07-22 16:02:37.737588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.623 [2024-07-22 16:02:37.738085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:25:33.623 [2024-07-22 16:02:37.738130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.623 passed 00:25:33.623 Test: blob_relations3 ...passed 00:25:33.882 Test: blobstore_clean_power_failure ...passed 00:25:33.882 Test: blob_delete_snapshot_power_failure ...[2024-07-22 16:02:38.019331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:25:33.882 [2024-07-22 16:02:38.045952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:25:33.882 [2024-07-22 16:02:38.068415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:33.882 [2024-07-22 16:02:38.068549] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:33.882 [2024-07-22 16:02:38.068577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.882 [2024-07-22 16:02:38.089649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:25:33.882 [2024-07-22 16:02:38.089794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:25:33.882 [2024-07-22 16:02:38.089815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:33.882 [2024-07-22 16:02:38.089839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.882 [2024-07-22 16:02:38.110876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:25:33.882 [2024-07-22 16:02:38.110981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:25:33.882 [2024-07-22 16:02:38.111013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:25:33.882 [2024-07-22 16:02:38.111039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:33.882 [2024-07-22 16:02:38.131655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:25:33.882 [2024-07-22 16:02:38.131768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:34.141 [2024-07-22 16:02:38.155052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:25:34.141 [2024-07-22 16:02:38.155212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:34.141 [2024-07-22 16:02:38.178302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:25:34.141 [2024-07-22 16:02:38.178511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:34.141 passed 00:25:34.141 Test: blob_create_snapshot_power_failure ...[2024-07-22 16:02:38.242216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:25:34.141 [2024-07-22 16:02:38.263357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:25:34.141 [2024-07-22 16:02:38.307560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:25:34.141 [2024-07-22 16:02:38.329133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:25:34.141 passed 00:25:34.400 Test: blob_io_unit ...passed 00:25:34.401 Test: blob_io_unit_compatibility ...passed 00:25:34.401 Test: blob_ext_md_pages ...passed 00:25:34.401 Test: blob_esnap_io_4096_4096 ...passed 00:25:34.401 Test: blob_esnap_io_512_512 ...passed 00:25:34.401 Test: blob_esnap_io_4096_512 ...passed 00:25:34.401 Test: 
blob_esnap_io_512_4096 ...passed 00:25:34.401 Suite: blob_bs_copy_extent 00:25:34.661 Test: blob_open ...passed 00:25:34.661 Test: blob_create ...[2024-07-22 16:02:38.701707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:25:34.661 passed 00:25:34.661 Test: blob_create_loop ...passed 00:25:34.661 Test: blob_create_fail ...[2024-07-22 16:02:38.822592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:34.661 passed 00:25:34.661 Test: blob_create_internal ...passed 00:25:34.661 Test: blob_create_zero_extent ...passed 00:25:34.920 Test: blob_snapshot ...passed 00:25:34.920 Test: blob_clone ...passed 00:25:34.920 Test: blob_inflate ...[2024-07-22 16:02:39.029214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:25:34.920 passed 00:25:34.920 Test: blob_delete ...passed 00:25:34.920 Test: blob_resize_test ...[2024-07-22 16:02:39.106286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:25:34.920 passed 00:25:34.920 Test: channel_ops ...passed 00:25:35.178 Test: blob_super ...passed 00:25:35.178 Test: blob_rw_verify_iov ...passed 00:25:35.178 Test: blob_unmap ...passed 00:25:35.178 Test: blob_iter ...passed 00:25:35.178 Test: blob_parse_md ...passed 00:25:35.178 Test: bs_load_pending_removal ...passed 00:25:35.178 Test: bs_unload ...[2024-07-22 16:02:39.429036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:25:35.178 passed 00:25:35.437 Test: bs_usable_clusters ...passed 00:25:35.437 Test: blob_crc ...[2024-07-22 16:02:39.512407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:35.437 [2024-07-22 16:02:39.512824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:25:35.437 passed 00:25:35.437 Test: blob_flags ...passed 00:25:35.437 Test: bs_version ...passed 00:25:35.437 Test: blob_set_xattrs_test ...[2024-07-22 16:02:39.633469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:35.437 [2024-07-22 16:02:39.633932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:25:35.437 passed 00:25:35.696 Test: blob_thin_prov_alloc ...passed 00:25:35.696 Test: blob_insert_cluster_msg_test ...passed 00:25:35.696 Test: blob_thin_prov_rw ...passed 00:25:35.696 Test: blob_thin_prov_rle ...passed 00:25:35.696 Test: blob_thin_prov_rw_iov ...passed 00:25:36.044 Test: blob_snapshot_rw ...passed 00:25:36.044 Test: blob_snapshot_rw_iov ...passed 00:25:36.044 Test: blob_inflate_rw ...passed 00:25:36.044 Test: blob_snapshot_freeze_io ...passed 00:25:36.302 Test: blob_operation_split_rw ...passed 00:25:36.560 Test: blob_operation_split_rw_iov ...passed 00:25:36.560 Test: blob_simultaneous_operations ...[2024-07-22 16:02:40.649249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:36.560 [2024-07-22 
16:02:40.649370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:36.560 [2024-07-22 16:02:40.649889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:36.560 [2024-07-22 16:02:40.649916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:36.560 [2024-07-22 16:02:40.652836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:36.560 [2024-07-22 16:02:40.652879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:36.560 [2024-07-22 16:02:40.652978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:25:36.560 [2024-07-22 16:02:40.653027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:25:36.560 passed 00:25:36.560 Test: blob_persist_test ...passed 00:25:36.560 Test: blob_decouple_snapshot ...passed 00:25:36.561 Test: blob_seek_io_unit ...passed 00:25:36.819 Test: blob_nested_freezes ...passed 00:25:36.819 Suite: blob_blob_copy_extent 00:25:36.819 Test: blob_write ...passed 00:25:36.819 Test: blob_read ...passed 00:25:36.819 Test: blob_rw_verify ...passed 00:25:36.819 Test: blob_rw_verify_iov_nomem ...passed 00:25:36.819 Test: blob_rw_iov_read_only ...passed 00:25:37.077 Test: blob_xattr ...passed 00:25:37.077 Test: blob_dirty_shutdown ...passed 00:25:37.077 Test: blob_is_degraded ...passed 00:25:37.077 Suite: blob_esnap_bs_copy_extent 00:25:37.077 Test: blob_esnap_create ...passed 00:25:37.077 Test: blob_esnap_thread_add_remove ...passed 00:25:37.077 Test: blob_esnap_clone_snapshot ...passed 00:25:37.335 Test: blob_esnap_clone_inflate ...passed 00:25:37.335 Test: blob_esnap_clone_decouple ...passed 00:25:37.335 Test: blob_esnap_clone_reload ...passed 00:25:37.335 Test: blob_esnap_hotplug ...passed 00:25:37.335 00:25:37.335 Run Summary: Type Total Ran Passed Failed Inactive 00:25:37.335 suites 16 16 n/a 0 0 00:25:37.335 tests 348 348 348 0 0 00:25:37.335 asserts 92605 92605 92605 0 n/a 00:25:37.335 00:25:37.335 Elapsed time = 15.632 seconds 00:25:37.335 16:02:41 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:25:37.335 00:25:37.335 00:25:37.335 CUnit - A unit testing framework for C - Version 2.1-3 00:25:37.335 http://cunit.sourceforge.net/ 00:25:37.335 00:25:37.335 00:25:37.335 Suite: blob_bdev 00:25:37.335 Test: create_bs_dev ...passed 00:25:37.335 Test: create_bs_dev_ro ...passed 00:25:37.335 Test: create_bs_dev_rw ...passed 00:25:37.335 Test: claim_bs_dev ...passed 00:25:37.335 Test: claim_bs_dev_ro ...passed 00:25:37.335 Test: deferred_destroy_refs ...[2024-07-22 16:02:41.586544] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:25:37.335 [2024-07-22 16:02:41.586958] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:25:37.335 passed 00:25:37.335 Test: deferred_destroy_channels ...passed 00:25:37.335 Test: deferred_destroy_threads ...passed 00:25:37.335 00:25:37.335 Run Summary: Type Total Ran Passed Failed Inactive 00:25:37.335 suites 1 1 n/a 0 0 00:25:37.335 tests 8 8 8 0 0 00:25:37.335 
asserts 119 119 119 0 n/a 00:25:37.335 00:25:37.335 Elapsed time = 0.001 seconds 00:25:37.335 16:02:41 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:25:37.594 00:25:37.594 00:25:37.594 CUnit - A unit testing framework for C - Version 2.1-3 00:25:37.594 http://cunit.sourceforge.net/ 00:25:37.594 00:25:37.594 00:25:37.594 Suite: tree 00:25:37.594 Test: blobfs_tree_op_test ...passed 00:25:37.594 00:25:37.594 Run Summary: Type Total Ran Passed Failed Inactive 00:25:37.594 suites 1 1 n/a 0 0 00:25:37.594 tests 1 1 1 0 0 00:25:37.594 asserts 27 27 27 0 n/a 00:25:37.594 00:25:37.594 Elapsed time = 0.000 seconds 00:25:37.594 16:02:41 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:25:37.594 00:25:37.594 00:25:37.594 CUnit - A unit testing framework for C - Version 2.1-3 00:25:37.594 http://cunit.sourceforge.net/ 00:25:37.594 00:25:37.594 00:25:37.594 Suite: blobfs_async_ut 00:25:37.594 Test: fs_init ...passed 00:25:37.594 Test: fs_open ...passed 00:25:37.594 Test: fs_create ...passed 00:25:37.594 Test: fs_truncate ...passed 00:25:37.594 Test: fs_rename ...[2024-07-22 16:02:41.801245] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:25:37.594 passed 00:25:37.594 Test: fs_rw_async ...passed 00:25:37.594 Test: fs_writev_readv_async ...passed 00:25:37.594 Test: tree_find_buffer_ut ...passed 00:25:37.594 Test: channel_ops ...passed 00:25:37.853 Test: channel_ops_sync ...passed 00:25:37.853 00:25:37.853 Run Summary: Type Total Ran Passed Failed Inactive 00:25:37.853 suites 1 1 n/a 0 0 00:25:37.853 tests 10 10 10 0 0 00:25:37.853 asserts 292 292 292 0 n/a 00:25:37.853 00:25:37.853 Elapsed time = 0.212 seconds 00:25:37.853 16:02:41 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:25:37.853 00:25:37.853 00:25:37.853 CUnit - A unit testing framework for C - Version 2.1-3 00:25:37.853 http://cunit.sourceforge.net/ 00:25:37.853 00:25:37.853 00:25:37.853 Suite: blobfs_sync_ut 00:25:37.853 Test: cache_read_after_write ...[2024-07-22 16:02:42.020462] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:25:37.853 passed 00:25:37.853 Test: file_length ...passed 00:25:37.853 Test: append_write_to_extend_blob ...passed 00:25:37.853 Test: partial_buffer ...passed 00:25:37.853 Test: cache_write_null_buffer ...passed 00:25:37.853 Test: fs_create_sync ...passed 00:25:38.113 Test: fs_rename_sync ...passed 00:25:38.113 Test: cache_append_no_cache ...passed 00:25:38.113 Test: fs_delete_file_without_close ...passed 00:25:38.113 00:25:38.113 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.113 suites 1 1 n/a 0 0 00:25:38.113 tests 9 9 9 0 0 00:25:38.113 asserts 345 345 345 0 n/a 00:25:38.113 00:25:38.113 Elapsed time = 0.427 seconds 00:25:38.113 16:02:42 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:25:38.113 00:25:38.113 00:25:38.113 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.113 http://cunit.sourceforge.net/ 00:25:38.113 00:25:38.113 00:25:38.113 Suite: blobfs_bdev_ut 00:25:38.113 Test: spdk_blobfs_bdev_detect_test ...passed 00:25:38.113 Test: spdk_blobfs_bdev_create_test ...passed 00:25:38.113 Test: spdk_blobfs_bdev_mount_test ...passed 00:25:38.113 00:25:38.113 Run Summary: Type Total Ran 
Passed Failed Inactive 00:25:38.113 suites 1 1 n/a 0 0 00:25:38.113 tests 3 3 3 0 0 00:25:38.113 asserts 9 9 9 0 n/a 00:25:38.113 00:25:38.113 Elapsed time = 0.001 seconds 00:25:38.113 [2024-07-22 16:02:42.230515] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:25:38.113 [2024-07-22 16:02:42.230823] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:25:38.113 00:25:38.113 real 0m16.434s 00:25:38.113 user 0m15.788s 00:25:38.113 sys 0m0.851s 00:25:38.113 16:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.113 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:25:38.113 ************************************ 00:25:38.113 END TEST unittest_blob_blobfs 00:25:38.113 ************************************ 00:25:38.113 16:02:42 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:25:38.113 16:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:38.113 16:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:38.113 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:25:38.113 ************************************ 00:25:38.113 START TEST unittest_event 00:25:38.113 ************************************ 00:25:38.113 16:02:42 -- common/autotest_common.sh@1104 -- # unittest_event 00:25:38.113 16:02:42 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:25:38.113 00:25:38.113 00:25:38.113 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.113 http://cunit.sourceforge.net/ 00:25:38.113 00:25:38.113 00:25:38.113 Suite: app_suite 00:25:38.113 Test: test_spdk_app_parse_args ...app_ut [options] 00:25:38.113 options: 00:25:38.113 -c, --config JSON config file (default none) 00:25:38.113 --json JSON config file (default none) 00:25:38.113 --json-ignore-init-errors 00:25:38.113 don't exit on invalid config entry 00:25:38.113 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:25:38.113 -g, --single-file-segments 00:25:38.113 force creating just one hugetlbfs file 00:25:38.113 -h, --help show this usage 00:25:38.113 -i, --shm-id shared memory ID (optional) 00:25:38.113 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:25:38.113 --lcores lcore to CPU mapping list. The list is in the format: 00:25:38.113 [<,lcores[@CPUs]>...] 00:25:38.113 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:25:38.113 Within the group, '-' is used for range separator, 00:25:38.113 ',' is used for single number separator. 00:25:38.113 '( )' can be omitted for single element group, 00:25:38.113 '@' can be omitted if cpus and lcores have the same value 00:25:38.113 -n, --mem-channels channel number of memory channels used for DPDK 00:25:38.113 -p, --main-core main (primary) core for DPDK 00:25:38.113 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:25:38.113 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:25:38.113 --disable-cpumask-locks Disable CPU core lock files. 
00:25:38.113 --silence-noticelog disable notice level logging to stderr 00:25:38.113 --msg-mempool-size global message memory pool size in count (default: 262143) 00:25:38.113 -u, --no-pci disable PCI access 00:25:38.113 --wait-for-rpc wait for RPCs to initialize subsystems 00:25:38.113 --max-delay maximum reactor delay (in microseconds) 00:25:38.113 -B, --pci-blocked pci addr to block (can be used more than once) 00:25:38.113 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:25:38.113 -R, --huge-unlink unlink huge files after initialization 00:25:38.113 -v, --version print SPDK version 00:25:38.113 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:25:38.113 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:25:38.113 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:25:38.113 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:25:38.113 Tracepoints vary in size and can use more than one trace entry. 00:25:38.113 --rpcs-allowed comma-separated list of permitted RPCS 00:25:38.113 --env-context Opaque context for use of the env implementation 00:25:38.113 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:25:38.113 --no-huge run without using hugepages 00:25:38.113 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:25:38.113 -e, --tpoint-group [:] 00:25:38.113 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:25:38.113 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:25:38.113 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:25:38.113 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:25:38.113 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:25:38.113 app_ut [options] 00:25:38.113 options: 00:25:38.113 -c, --config JSON config file (default none) 00:25:38.113 --json JSON config file (default none) 00:25:38.113 --json-ignore-init-errors 00:25:38.113 don't exit on invalid config entry 00:25:38.113 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:25:38.113 -g, --single-file-segments 00:25:38.113 force creating just one hugetlbfs file 00:25:38.113 -h, --help show this usage 00:25:38.113 -i, --shm-id shared memory ID (optional) 00:25:38.113 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:25:38.113 --lcores lcore to CPU mapping list. The list is in the format: 00:25:38.114 [<,lcores[@CPUs]>...] 00:25:38.114 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:25:38.114 Within the group, '-' is used for range separator, 00:25:38.114 ',' is used for single number separator. 00:25:38.114 '( )' can be omitted for single element group, 00:25:38.114 '@' can be omitted if cpus and lcores have the same value 00:25:38.114 -n, --mem-channels channel number of memory channels used for DPDK 00:25:38.114 -p, --main-core main (primary) core for DPDK 00:25:38.114 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:25:38.114 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:25:38.114 --disable-cpumask-locks Disable CPU core lock files. 
00:25:38.114 --silence-noticelog disable notice level logging to stderr 00:25:38.114 --msg-mempool-size global message memory pool size in count (default: 262143) 00:25:38.114 -u, --no-pci disable PCI access 00:25:38.114 --wait-for-rpc wait for RPCs to initialize subsystems 00:25:38.114 --max-delay maximum reactor delay (in microseconds) 00:25:38.114 -B, --pci-blocked pci addr to block (can be used more than once) 00:25:38.114 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:25:38.114 -R, --huge-unlink unlink huge files after initialization 00:25:38.114 -v, --version print SPDK version 00:25:38.114 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:25:38.114 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:25:38.114 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:25:38.114 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:25:38.114 Tracepoints vary in size and can use more than one trace entry. 00:25:38.114 --rpcs-allowed comma-separated list of permitted RPCS 00:25:38.114 --env-context Opaque context for use of the env implementation 00:25:38.114 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:25:38.114 --no-huge run without using hugepages 00:25:38.114 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:25:38.114 -e, --tpoint-group [:] 00:25:38.114 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:25:38.114 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:25:38.114 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:25:38.114 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:25:38.114 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:25:38.114 app_ut: invalid option -- 'z' 00:25:38.114 app_ut: unrecognized option '--test-long-opt' 00:25:38.114 [2024-07-22 16:02:42.315945] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:25:38.114 app_ut [options] 00:25:38.114 options: 00:25:38.114 -c, --config JSON config file (default none) 00:25:38.114 --json JSON config file (default none) 00:25:38.114 --json-ignore-init-errors 00:25:38.114 don't exit on invalid config entry 00:25:38.114 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:25:38.114 -g, --single-file-segments 00:25:38.114 force creating just one hugetlbfs file 00:25:38.114 -h, --help show this usage 00:25:38.114 -i, --shm-id shared memory ID (optional) 00:25:38.114 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:25:38.114 --lcores lcore to CPU mapping list. The list is in the format: 00:25:38.114 [<,lcores[@CPUs]>...] 00:25:38.114 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:25:38.114 Within the group, '-' is used for range separator, 00:25:38.114 ',' is used for single number separator. 
00:25:38.114 '( )' can be omitted for single element group, 00:25:38.114 '@' can be omitted if cpus and lcores have the same value[2024-07-22 16:02:42.316209] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:25:38.114 00:25:38.114 -n, --mem-channels channel number of memory channels used for DPDK 00:25:38.114 -p, --main-core main (primary) core for DPDK 00:25:38.114 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:25:38.114 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:25:38.114 --disable-cpumask-locks Disable CPU core lock files. 00:25:38.114 --silence-noticelog disable notice level logging to stderr 00:25:38.114 --msg-mempool-size global message memory pool size in count (default: 262143) 00:25:38.114 -u, --no-pci disable PCI access 00:25:38.114 --wait-for-rpc wait for RPCs to initialize subsystems 00:25:38.114 --max-delay maximum reactor delay (in microseconds) 00:25:38.114 -B, --pci-blocked pci addr to block (can be used more than once) 00:25:38.114 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:25:38.114 -R, --huge-unlink unlink huge files after initialization 00:25:38.114 -v, --version print SPDK version 00:25:38.114 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:25:38.114 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:25:38.114 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:25:38.114 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:25:38.114 Tracepoints vary in size and can use more than one trace entry. 00:25:38.114 --rpcs-allowed comma-separated list of permitted RPCS 00:25:38.114 --env-context Opaque context for use of the env implementation 00:25:38.114 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:25:38.114 --no-huge run without using hugepages 00:25:38.114 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:25:38.114 -e, --tpoint-group [:] 00:25:38.114 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:25:38.114 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:25:38.114 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:25:38.114 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:25:38.114 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:25:38.114 passed 00:25:38.114 00:25:38.114 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.114 suites 1 1 n/a 0 0 00:25:38.114 tests 1 1 1 0 0 00:25:38.114 asserts 8 8 8 0 n/a 00:25:38.114 00:25:38.114 Elapsed time = 0.001 seconds 00:25:38.114 [2024-07-22 16:02:42.316373] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:25:38.114 16:02:42 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:25:38.114 00:25:38.114 00:25:38.114 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.114 http://cunit.sourceforge.net/ 00:25:38.114 00:25:38.114 00:25:38.114 Suite: app_suite 00:25:38.114 Test: test_create_reactor ...passed 00:25:38.114 Test: test_init_reactors ...passed 00:25:38.114 Test: test_event_call ...passed 00:25:38.114 Test: test_schedule_thread ...passed 00:25:38.114 Test: test_reschedule_thread ...passed 00:25:38.114 Test: test_bind_thread ...passed 00:25:38.114 Test: test_for_each_reactor ...passed 00:25:38.114 Test: test_reactor_stats ...passed 00:25:38.114 Test: test_scheduler ...passed 00:25:38.114 Test: test_governor ...passed 00:25:38.114 00:25:38.114 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.114 suites 1 1 n/a 0 0 00:25:38.114 tests 10 10 10 0 0 00:25:38.114 asserts 344 344 344 0 n/a 00:25:38.114 00:25:38.114 Elapsed time = 0.033 seconds 00:25:38.373 ************************************ 00:25:38.373 END TEST unittest_event 00:25:38.373 ************************************ 00:25:38.373 00:25:38.373 real 0m0.107s 00:25:38.373 user 0m0.064s 00:25:38.373 sys 0m0.042s 00:25:38.373 16:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.373 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:25:38.373 16:02:42 -- unit/unittest.sh@233 -- # uname -s 00:25:38.373 16:02:42 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:25:38.373 16:02:42 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:25:38.373 16:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:38.373 16:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:38.373 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:25:38.373 ************************************ 00:25:38.373 START TEST unittest_ftl 00:25:38.373 ************************************ 00:25:38.373 16:02:42 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:25:38.373 16:02:42 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:25:38.373 00:25:38.373 00:25:38.373 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.373 http://cunit.sourceforge.net/ 00:25:38.373 00:25:38.373 00:25:38.373 Suite: ftl_band_suite 00:25:38.373 Test: test_band_block_offset_from_addr_base ...passed 00:25:38.373 Test: test_band_block_offset_from_addr_offset ...passed 00:25:38.373 Test: test_band_addr_from_block_offset ...passed 00:25:38.632 Test: test_band_set_addr ...passed 00:25:38.632 Test: test_invalidate_addr ...passed 00:25:38.632 Test: test_next_xfer_addr ...passed 00:25:38.632 00:25:38.632 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.632 suites 1 1 n/a 0 0 00:25:38.632 tests 6 6 6 0 0 00:25:38.632 asserts 30356 30356 30356 0 n/a 00:25:38.632 
00:25:38.632 Elapsed time = 0.251 seconds 00:25:38.632 16:02:42 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:25:38.632 00:25:38.632 00:25:38.632 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.632 http://cunit.sourceforge.net/ 00:25:38.632 00:25:38.632 00:25:38.632 Suite: ftl_bitmap 00:25:38.632 Test: test_ftl_bitmap_create ...passed 00:25:38.632 Test: test_ftl_bitmap_get ...passed 00:25:38.632 Test: test_ftl_bitmap_set ...passed 00:25:38.632 Test: test_ftl_bitmap_clear ...[2024-07-22 16:02:42.809298] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:25:38.632 [2024-07-22 16:02:42.809583] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:25:38.632 passed 00:25:38.632 Test: test_ftl_bitmap_find_first_set ...passed 00:25:38.632 Test: test_ftl_bitmap_find_first_clear ...passed 00:25:38.632 Test: test_ftl_bitmap_count_set ...passed 00:25:38.632 00:25:38.632 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.632 suites 1 1 n/a 0 0 00:25:38.632 tests 7 7 7 0 0 00:25:38.632 asserts 137 137 137 0 n/a 00:25:38.632 00:25:38.632 Elapsed time = 0.001 seconds 00:25:38.632 16:02:42 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:25:38.632 00:25:38.632 00:25:38.632 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.632 http://cunit.sourceforge.net/ 00:25:38.632 00:25:38.632 00:25:38.632 Suite: ftl_io_suite 00:25:38.632 Test: test_completion ...passed 00:25:38.632 Test: test_multiple_ios ...passed 00:25:38.632 00:25:38.632 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.632 suites 1 1 n/a 0 0 00:25:38.632 tests 2 2 2 0 0 00:25:38.632 asserts 47 47 47 0 n/a 00:25:38.632 00:25:38.632 Elapsed time = 0.005 seconds 00:25:38.632 16:02:42 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:25:38.632 00:25:38.632 00:25:38.632 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.632 http://cunit.sourceforge.net/ 00:25:38.632 00:25:38.632 00:25:38.632 Suite: ftl_mngt 00:25:38.632 Test: test_next_step ...passed 00:25:38.632 Test: test_continue_step ...passed 00:25:38.632 Test: test_get_func_and_step_cntx_alloc ...passed 00:25:38.632 Test: test_fail_step ...passed 00:25:38.632 Test: test_mngt_call_and_call_rollback ...passed 00:25:38.632 Test: test_nested_process_failure ...passed 00:25:38.632 00:25:38.632 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.632 suites 1 1 n/a 0 0 00:25:38.632 tests 6 6 6 0 0 00:25:38.632 asserts 176 176 176 0 n/a 00:25:38.632 00:25:38.632 Elapsed time = 0.002 seconds 00:25:38.632 16:02:42 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:25:38.894 00:25:38.894 00:25:38.894 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.894 http://cunit.sourceforge.net/ 00:25:38.894 00:25:38.894 00:25:38.894 Suite: ftl_mempool 00:25:38.894 Test: test_ftl_mempool_create ...passed 00:25:38.894 Test: test_ftl_mempool_get_put ...passed 00:25:38.894 00:25:38.894 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.894 suites 1 1 n/a 0 0 00:25:38.894 tests 2 2 2 0 0 00:25:38.894 asserts 36 36 36 0 n/a 00:25:38.894 00:25:38.894 Elapsed time = 0.000 seconds 00:25:38.894 16:02:42 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:25:38.894 00:25:38.894 00:25:38.894 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.894 http://cunit.sourceforge.net/ 00:25:38.894 00:25:38.894 00:25:38.894 Suite: ftl_addr64_suite 00:25:38.894 Test: test_addr_cached ...passed 00:25:38.894 00:25:38.894 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.894 suites 1 1 n/a 0 0 00:25:38.894 tests 1 1 1 0 0 00:25:38.894 asserts 1536 1536 1536 0 n/a 00:25:38.894 00:25:38.894 Elapsed time = 0.000 seconds 00:25:38.894 16:02:42 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:25:38.894 00:25:38.894 00:25:38.894 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.894 http://cunit.sourceforge.net/ 00:25:38.894 00:25:38.894 00:25:38.894 Suite: ftl_sb 00:25:38.894 Test: test_sb_crc_v2 ...passed 00:25:38.894 Test: test_sb_crc_v3 ...passed 00:25:38.894 Test: test_sb_v3_md_layout ...passed 00:25:38.894 Test: test_sb_v5_md_layout ...passed[2024-07-22 16:02:42.965338] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:25:38.894 [2024-07-22 16:02:42.965607] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:25:38.894 [2024-07-22 16:02:42.965648] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:25:38.894 [2024-07-22 16:02:42.965674] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:25:38.894 [2024-07-22 16:02:42.965714] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:25:38.894 [2024-07-22 16:02:42.965747] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:25:38.895 [2024-07-22 16:02:42.965777] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:25:38.895 [2024-07-22 16:02:42.965800] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:25:38.895 [2024-07-22 16:02:42.965877] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:25:38.895 [2024-07-22 16:02:42.965926] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:25:38.895 [2024-07-22 16:02:42.965969] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:25:38.895 00:25:38.895 00:25:38.895 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.895 suites 1 1 n/a 0 0 00:25:38.895 tests 4 4 4 0 0 00:25:38.895 asserts 148 148 148 0 n/a 00:25:38.895 00:25:38.895 Elapsed time = 0.002 seconds 00:25:38.895 16:02:42 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:25:38.895 00:25:38.895 00:25:38.895 CUnit - A unit testing framework 
for C - Version 2.1-3 00:25:38.895 http://cunit.sourceforge.net/ 00:25:38.895 00:25:38.895 00:25:38.895 Suite: ftl_layout_upgrade 00:25:38.895 Test: test_l2p_upgrade ...passed 00:25:38.895 00:25:38.895 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.895 suites 1 1 n/a 0 0 00:25:38.895 tests 1 1 1 0 0 00:25:38.895 asserts 140 140 140 0 n/a 00:25:38.895 00:25:38.895 Elapsed time = 0.001 seconds 00:25:38.895 ************************************ 00:25:38.895 END TEST unittest_ftl 00:25:38.895 ************************************ 00:25:38.895 00:25:38.895 real 0m0.551s 00:25:38.895 user 0m0.216s 00:25:38.895 sys 0m0.330s 00:25:38.895 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.895 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.895 16:02:43 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:25:38.895 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:38.895 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:38.895 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:38.895 ************************************ 00:25:38.895 START TEST unittest_accel 00:25:38.895 ************************************ 00:25:38.895 16:02:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:25:38.895 00:25:38.895 00:25:38.895 CUnit - A unit testing framework for C - Version 2.1-3 00:25:38.895 http://cunit.sourceforge.net/ 00:25:38.895 00:25:38.895 00:25:38.895 Suite: accel_sequence 00:25:38.895 Test: test_sequence_fill_copy ...passed 00:25:38.895 Test: test_sequence_abort ...passed 00:25:38.895 Test: test_sequence_append_error ...passed 00:25:38.895 Test: test_sequence_completion_error ...[2024-07-22 16:02:43.089512] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7997769547c0 00:25:38.895 [2024-07-22 16:02:43.089932] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7997769547c0 00:25:38.895 [2024-07-22 16:02:43.090219] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7997769547c0 00:25:38.895 [2024-07-22 16:02:43.090452] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7997769547c0 00:25:38.895 passed 00:25:38.895 Test: test_sequence_decompress ...passed 00:25:38.895 Test: test_sequence_reverse ...passed 00:25:38.895 Test: test_sequence_copy_elision ...passed 00:25:38.895 Test: test_sequence_accel_buffers ...passed 00:25:38.895 Test: test_sequence_memory_domain ...[2024-07-22 16:02:43.105132] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:25:38.895 [2024-07-22 16:02:43.105475] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:25:38.895 passed 00:25:38.895 Test: test_sequence_module_memory_domain ...passed 00:25:38.895 Test: test_sequence_crypto ...passed 00:25:38.895 Test: test_sequence_driver ...[2024-07-22 16:02:43.113798] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x799773bd07c0 using driver: ut 00:25:38.895 
[2024-07-22 16:02:43.114058] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x799773bd07c0 through driver: ut 00:25:38.895 passed 00:25:38.895 Test: test_sequence_same_iovs ...passed 00:25:38.895 Test: test_sequence_crc32 ...passed 00:25:38.895 Suite: accel 00:25:38.895 Test: test_spdk_accel_task_complete ...passed 00:25:38.895 Test: test_get_task ...passed 00:25:38.895 Test: test_spdk_accel_submit_copy ...passed 00:25:38.895 Test: test_spdk_accel_submit_dualcast ...passed 00:25:38.895 Test: test_spdk_accel_submit_compare ...passed 00:25:38.895 Test: test_spdk_accel_submit_fill ...passed 00:25:38.895 Test: test_spdk_accel_submit_crc32c ...passed 00:25:38.895 Test: test_spdk_accel_submit_crc32cv ...passed 00:25:38.895 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:25:38.895 Test: test_spdk_accel_submit_xor ...passed 00:25:38.895 Test: test_spdk_accel_module_find_by_name ...passed 00:25:38.895 Test: test_spdk_accel_module_register ...[2024-07-22 16:02:43.120532] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:25:38.895 [2024-07-22 16:02:43.120591] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:25:38.895 passed 00:25:38.895 00:25:38.895 Run Summary: Type Total Ran Passed Failed Inactive 00:25:38.895 suites 2 2 n/a 0 0 00:25:38.895 tests 26 26 26 0 0 00:25:38.895 asserts 831 831 831 0 n/a 00:25:38.895 00:25:38.895 Elapsed time = 0.041 seconds 00:25:38.895 00:25:38.895 real 0m0.086s 00:25:38.895 user 0m0.048s 00:25:38.895 sys 0m0.034s 00:25:38.895 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:38.895 ************************************ 00:25:38.895 END TEST unittest_accel 00:25:38.895 ************************************ 00:25:38.895 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.157 16:02:43 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:25:39.157 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.157 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.157 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.157 ************************************ 00:25:39.157 START TEST unittest_ioat 00:25:39.157 ************************************ 00:25:39.157 16:02:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:25:39.157 00:25:39.157 00:25:39.157 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.157 http://cunit.sourceforge.net/ 00:25:39.157 00:25:39.157 00:25:39.157 Suite: ioat 00:25:39.157 Test: ioat_state_check ...passed 00:25:39.157 00:25:39.157 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.157 suites 1 1 n/a 0 0 00:25:39.157 tests 1 1 1 0 0 00:25:39.157 asserts 32 32 32 0 n/a 00:25:39.157 00:25:39.157 Elapsed time = 0.000 seconds 00:25:39.157 00:25:39.157 real 0m0.031s 00:25:39.157 user 0m0.015s 00:25:39.157 sys 0m0.016s 00:25:39.157 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.157 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.157 ************************************ 00:25:39.157 END TEST unittest_ioat 00:25:39.157 ************************************ 00:25:39.157 16:02:43 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:39.157 16:02:43 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:25:39.157 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.157 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.157 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.157 ************************************ 00:25:39.157 START TEST unittest_idxd_user 00:25:39.157 ************************************ 00:25:39.157 16:02:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:25:39.157 00:25:39.157 00:25:39.157 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.157 http://cunit.sourceforge.net/ 00:25:39.157 00:25:39.157 00:25:39.157 Suite: idxd_user 00:25:39.157 Test: test_idxd_wait_cmd ...[2024-07-22 16:02:43.296641] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:25:39.157 passed 00:25:39.157 Test: test_idxd_reset_dev ...[2024-07-22 16:02:43.296839] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:25:39.157 [2024-07-22 16:02:43.296923] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:25:39.157 [2024-07-22 16:02:43.296956] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:25:39.157 passed 00:25:39.157 Test: test_idxd_group_config ...passed 00:25:39.157 Test: test_idxd_wq_config ...passed 00:25:39.157 00:25:39.157 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.157 suites 1 1 n/a 0 0 00:25:39.157 tests 4 4 4 0 0 00:25:39.157 asserts 20 20 20 0 n/a 00:25:39.157 00:25:39.157 Elapsed time = 0.001 seconds 00:25:39.157 ************************************ 00:25:39.157 END TEST unittest_idxd_user 00:25:39.157 ************************************ 00:25:39.157 00:25:39.157 real 0m0.033s 00:25:39.157 user 0m0.014s 00:25:39.157 sys 0m0.019s 00:25:39.157 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.157 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.157 16:02:43 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:25:39.157 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.157 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.157 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.157 ************************************ 00:25:39.157 START TEST unittest_iscsi 00:25:39.157 ************************************ 00:25:39.157 16:02:43 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:25:39.157 16:02:43 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:25:39.157 00:25:39.157 00:25:39.157 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.157 http://cunit.sourceforge.net/ 00:25:39.157 00:25:39.157 00:25:39.157 Suite: conn_suite 00:25:39.157 Test: read_task_split_in_order_case ...passed 00:25:39.157 Test: read_task_split_reverse_order_case ...passed 00:25:39.157 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:25:39.157 Test: process_non_read_task_completion_test ...passed 00:25:39.157 Test: free_tasks_on_connection ...passed 00:25:39.157 Test: free_tasks_with_queued_datain ...passed 00:25:39.157 Test: 
abort_queued_datain_task_test ...passed 00:25:39.157 Test: abort_queued_datain_tasks_test ...passed 00:25:39.157 00:25:39.157 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.157 suites 1 1 n/a 0 0 00:25:39.157 tests 8 8 8 0 0 00:25:39.157 asserts 230 230 230 0 n/a 00:25:39.157 00:25:39.157 Elapsed time = 0.001 seconds 00:25:39.157 16:02:43 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:25:39.157 00:25:39.157 00:25:39.157 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.157 http://cunit.sourceforge.net/ 00:25:39.157 00:25:39.157 00:25:39.157 Suite: iscsi_suite 00:25:39.417 Test: param_negotiation_test ...passed 00:25:39.417 Test: list_negotiation_test ...passed 00:25:39.417 Test: parse_valid_test ...passed 00:25:39.417 Test: parse_invalid_test ...[2024-07-22 16:02:43.431859] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:25:39.417 [2024-07-22 16:02:43.432268] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:25:39.417 [2024-07-22 16:02:43.432356] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:25:39.417 [2024-07-22 16:02:43.432436] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:25:39.417 [2024-07-22 16:02:43.432630] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:25:39.417 [2024-07-22 16:02:43.432731] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:25:39.417 [2024-07-22 16:02:43.432852] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:25:39.417 passed 00:25:39.417 00:25:39.417 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.417 suites 1 1 n/a 0 0 00:25:39.417 tests 4 4 4 0 0 00:25:39.417 asserts 161 161 161 0 n/a 00:25:39.417 00:25:39.417 Elapsed time = 0.008 seconds 00:25:39.417 16:02:43 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:25:39.417 00:25:39.417 00:25:39.417 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.417 http://cunit.sourceforge.net/ 00:25:39.417 00:25:39.417 00:25:39.417 Suite: iscsi_target_node_suite 00:25:39.417 Test: add_lun_test_cases ...[2024-07-22 16:02:43.462870] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:25:39.417 [2024-07-22 16:02:43.463105] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:25:39.417 [2024-07-22 16:02:43.463143] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:25:39.417 [2024-07-22 16:02:43.463171] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:25:39.417 passed 00:25:39.417 Test: allow_any_allowed ...passed 00:25:39.417 Test: allow_ipv6_allowed ...passed 00:25:39.417 Test: allow_ipv6_denied ...passed 00:25:39.417 Test: allow_ipv6_invalid ...passed 00:25:39.417 Test: allow_ipv4_allowed ...passed 00:25:39.417 Test: allow_ipv4_denied ...passed 00:25:39.417 Test: allow_ipv4_invalid ...passed[2024-07-22 16:02:43.463200] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 
00:25:39.417 00:25:39.417 Test: node_access_allowed ...passed 00:25:39.417 Test: node_access_denied_by_empty_netmask ...passed 00:25:39.417 Test: node_access_multi_initiator_groups_cases ...passed 00:25:39.417 Test: allow_iscsi_name_multi_maps_case ...passed 00:25:39.417 Test: chap_param_test_cases ...[2024-07-22 16:02:43.463718] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:25:39.417 [2024-07-22 16:02:43.463760] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:25:39.418 [2024-07-22 16:02:43.463791] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:25:39.418 [2024-07-22 16:02:43.463835] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:25:39.418 [2024-07-22 16:02:43.463866] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:25:39.418 passed 00:25:39.418 00:25:39.418 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.418 suites 1 1 n/a 0 0 00:25:39.418 tests 13 13 13 0 0 00:25:39.418 asserts 50 50 50 0 n/a 00:25:39.418 00:25:39.418 Elapsed time = 0.001 seconds 00:25:39.418 16:02:43 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:25:39.418 00:25:39.418 00:25:39.418 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.418 http://cunit.sourceforge.net/ 00:25:39.418 00:25:39.418 00:25:39.418 Suite: iscsi_suite 00:25:39.418 Test: op_login_check_target_test ...[2024-07-22 16:02:43.501485] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:25:39.418 passed 00:25:39.418 Test: op_login_session_normal_test ...[2024-07-22 16:02:43.501827] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:25:39.418 [2024-07-22 16:02:43.501873] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:25:39.418 [2024-07-22 16:02:43.501906] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:25:39.418 [2024-07-22 16:02:43.501946] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:25:39.418 [2024-07-22 16:02:43.502012] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:25:39.418 passed 00:25:39.418 Test: maxburstlength_test ...[2024-07-22 16:02:43.502081] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:25:39.418 [2024-07-22 16:02:43.502121] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:25:39.418 [2024-07-22 16:02:43.502402] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:25:39.418 [2024-07-22 16:02:43.502455] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU 
header (opcode=5) failed on NULL(NULL) 00:25:39.418 passed 00:25:39.418 Test: underflow_for_read_transfer_test ...passed 00:25:39.418 Test: underflow_for_zero_read_transfer_test ...passed 00:25:39.418 Test: underflow_for_request_sense_test ...passed 00:25:39.418 Test: underflow_for_check_condition_test ...passed 00:25:39.418 Test: add_transfer_task_test ...passed 00:25:39.418 Test: get_transfer_task_test ...passed 00:25:39.418 Test: del_transfer_task_test ...passed 00:25:39.418 Test: clear_all_transfer_tasks_test ...passed 00:25:39.418 Test: build_iovs_test ...passed 00:25:39.418 Test: build_iovs_with_md_test ...passed 00:25:39.418 Test: pdu_hdr_op_login_test ...[2024-07-22 16:02:43.504050] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:25:39.418 [2024-07-22 16:02:43.504160] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:25:39.418 [2024-07-22 16:02:43.504226] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:25:39.418 passed 00:25:39.418 Test: pdu_hdr_op_text_test ...[2024-07-22 16:02:43.504341] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:25:39.418 [2024-07-22 16:02:43.504410] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:25:39.418 [2024-07-22 16:02:43.504449] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:25:39.418 passed 00:25:39.418 Test: pdu_hdr_op_logout_test ...passed 00:25:39.418 Test: pdu_hdr_op_scsi_test ...[2024-07-22 16:02:43.504522] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:25:39.418 [2024-07-22 16:02:43.504632] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:25:39.418 [2024-07-22 16:02:43.504671] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:25:39.418 [2024-07-22 16:02:43.504711] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:25:39.418 [2024-07-22 16:02:43.504793] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:25:39.418 [2024-07-22 16:02:43.504859] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:25:39.418 passed 00:25:39.418 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-22 16:02:43.505047] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:25:39.418 [2024-07-22 16:02:43.505161] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:25:39.418 [2024-07-22 16:02:43.505237] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:25:39.418 passed 00:25:39.418 Test: pdu_hdr_op_nopout_test ...[2024-07-22 16:02:43.505465] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:25:39.418 passed 00:25:39.418 Test: pdu_hdr_op_data_test ...[2024-07-22 16:02:43.505536] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:25:39.418 [2024-07-22 16:02:43.505576] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:25:39.418 [2024-07-22 16:02:43.505608] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:25:39.418 [2024-07-22 16:02:43.505669] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:25:39.418 [2024-07-22 16:02:43.505753] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:25:39.418 [2024-07-22 16:02:43.505823] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:25:39.418 [2024-07-22 16:02:43.505848] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:25:39.418 [2024-07-22 16:02:43.505907] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:25:39.418 [2024-07-22 16:02:43.505960] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:25:39.418 passed 00:25:39.418 Test: empty_text_with_cbit_test ...passed 00:25:39.418 Test: pdu_payload_read_test ...[2024-07-22 16:02:43.506008] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:25:39.418 [2024-07-22 16:02:43.508156] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:25:39.418 passed 00:25:39.418 Test: data_out_pdu_sequence_test ...passed 00:25:39.418 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:25:39.418 00:25:39.418 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.418 suites 1 1 n/a 0 0 00:25:39.418 tests 24 24 24 0 0 00:25:39.418 asserts 150253 150253 150253 0 n/a 00:25:39.418 00:25:39.418 Elapsed time = 0.017 seconds 00:25:39.418 16:02:43 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:25:39.418 00:25:39.418 00:25:39.418 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.418 http://cunit.sourceforge.net/ 00:25:39.418 00:25:39.418 00:25:39.418 Suite: init_grp_suite 00:25:39.418 Test: create_initiator_group_success_case ...passed 00:25:39.418 Test: find_initiator_group_success_case ...passed 00:25:39.418 Test: register_initiator_group_twice_case ...passed 00:25:39.418 Test: add_initiator_name_success_case ...passed 00:25:39.418 Test: add_initiator_name_fail_case ...[2024-07-22 16:02:43.545311] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:25:39.418 passed 00:25:39.418 Test: delete_all_initiator_names_success_case ...passed 00:25:39.418 Test: add_netmask_success_case ...passed 00:25:39.418 Test: add_netmask_fail_case ...passed 00:25:39.418 Test: delete_all_netmasks_success_case ...[2024-07-22 16:02:43.545621] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:25:39.418 passed 00:25:39.418 Test: initiator_name_overwrite_all_to_any_case ...passed 00:25:39.418 Test: netmask_overwrite_all_to_any_case ...passed 00:25:39.418 Test: add_delete_initiator_names_case ...passed 00:25:39.418 Test: add_duplicated_initiator_names_case ...passed 00:25:39.418 Test: delete_nonexisting_initiator_names_case ...passed 00:25:39.418 Test: add_delete_netmasks_case ...passed 00:25:39.418 Test: add_duplicated_netmasks_case ...passed 00:25:39.418 Test: delete_nonexisting_netmasks_case ...passed 00:25:39.418 00:25:39.418 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.418 suites 1 1 n/a 0 0 00:25:39.418 tests 17 17 17 0 0 00:25:39.418 asserts 108 108 108 0 n/a 00:25:39.418 00:25:39.418 Elapsed time = 0.001 seconds 00:25:39.418 16:02:43 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:25:39.418 00:25:39.418 00:25:39.418 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.418 http://cunit.sourceforge.net/ 00:25:39.418 00:25:39.418 00:25:39.418 Suite: portal_grp_suite 00:25:39.418 Test: portal_create_ipv4_normal_case ...passed 00:25:39.418 Test: portal_create_ipv6_normal_case ...passed 00:25:39.418 Test: portal_create_ipv4_wildcard_case ...passed 00:25:39.418 Test: portal_create_ipv6_wildcard_case ...passed 00:25:39.418 Test: portal_create_twice_case ...passed 00:25:39.418 Test: portal_grp_register_unregister_case ...passed 00:25:39.418 Test: portal_grp_register_twice_case ...[2024-07-22 16:02:43.576579] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:25:39.418 passed 00:25:39.418 Test: portal_grp_add_delete_case ...passed 00:25:39.419 Test: portal_grp_add_delete_twice_case ...passed 00:25:39.419 00:25:39.419 Run Summary: Type Total Ran 
Passed Failed Inactive 00:25:39.419 suites 1 1 n/a 0 0 00:25:39.419 tests 9 9 9 0 0 00:25:39.419 asserts 44 44 44 0 n/a 00:25:39.419 00:25:39.419 Elapsed time = 0.003 seconds 00:25:39.419 00:25:39.419 real 0m0.232s 00:25:39.419 user 0m0.117s 00:25:39.419 sys 0m0.116s 00:25:39.419 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.419 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 ************************************ 00:25:39.419 END TEST unittest_iscsi 00:25:39.419 ************************************ 00:25:39.419 16:02:43 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:25:39.419 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.419 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.419 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.419 ************************************ 00:25:39.419 START TEST unittest_json 00:25:39.419 ************************************ 00:25:39.419 16:02:43 -- common/autotest_common.sh@1104 -- # unittest_json 00:25:39.419 16:02:43 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:25:39.419 00:25:39.419 00:25:39.419 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.419 http://cunit.sourceforge.net/ 00:25:39.419 00:25:39.419 00:25:39.419 Suite: json 00:25:39.419 Test: test_parse_literal ...passed 00:25:39.419 Test: test_parse_string_simple ...passed 00:25:39.419 Test: test_parse_string_control_chars ...passed 00:25:39.419 Test: test_parse_string_utf8 ...passed 00:25:39.419 Test: test_parse_string_escapes_twochar ...passed 00:25:39.419 Test: test_parse_string_escapes_unicode ...passed 00:25:39.419 Test: test_parse_number ...passed 00:25:39.419 Test: test_parse_array ...passed 00:25:39.419 Test: test_parse_object ...passed 00:25:39.419 Test: test_parse_nesting ...passed 00:25:39.419 Test: test_parse_comment ...passed 00:25:39.419 00:25:39.419 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.419 suites 1 1 n/a 0 0 00:25:39.419 tests 11 11 11 0 0 00:25:39.419 asserts 1516 1516 1516 0 n/a 00:25:39.419 00:25:39.419 Elapsed time = 0.003 seconds 00:25:39.419 16:02:43 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:25:39.419 00:25:39.419 00:25:39.419 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.419 http://cunit.sourceforge.net/ 00:25:39.419 00:25:39.419 00:25:39.419 Suite: json 00:25:39.419 Test: test_strequal ...passed 00:25:39.678 Test: test_num_to_uint16 ...passed 00:25:39.678 Test: test_num_to_int32 ...passed 00:25:39.678 Test: test_num_to_uint64 ...passed 00:25:39.678 Test: test_decode_object ...passed 00:25:39.678 Test: test_decode_array ...passed 00:25:39.678 Test: test_decode_bool ...passed 00:25:39.678 Test: test_decode_uint16 ...passed 00:25:39.678 Test: test_decode_int32 ...passed 00:25:39.678 Test: test_decode_uint32 ...passed 00:25:39.678 Test: test_decode_uint64 ...passed 00:25:39.678 Test: test_decode_string ...passed 00:25:39.678 Test: test_decode_uuid ...passed 00:25:39.678 Test: test_find ...passed 00:25:39.678 Test: test_find_array ...passed 00:25:39.678 Test: test_iterating ...passed 00:25:39.678 Test: test_free_object ...passed 00:25:39.678 00:25:39.678 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.678 suites 1 1 n/a 0 0 00:25:39.678 tests 17 17 17 0 0 00:25:39.678 asserts 236 236 236 0 n/a 00:25:39.678 00:25:39.678 Elapsed time = 0.001 seconds 00:25:39.678 16:02:43 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:25:39.678 00:25:39.678 00:25:39.678 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.678 http://cunit.sourceforge.net/ 00:25:39.678 00:25:39.678 00:25:39.678 Suite: json 00:25:39.678 Test: test_write_literal ...passed 00:25:39.678 Test: test_write_string_simple ...passed 00:25:39.678 Test: test_write_string_escapes ...passed 00:25:39.678 Test: test_write_string_utf16le ...passed 00:25:39.678 Test: test_write_number_int32 ...passed 00:25:39.678 Test: test_write_number_uint32 ...passed 00:25:39.678 Test: test_write_number_uint128 ...passed 00:25:39.678 Test: test_write_string_number_uint128 ...passed 00:25:39.678 Test: test_write_number_int64 ...passed 00:25:39.678 Test: test_write_number_uint64 ...passed 00:25:39.679 Test: test_write_number_double ...passed 00:25:39.679 Test: test_write_uuid ...passed 00:25:39.679 Test: test_write_array ...passed 00:25:39.679 Test: test_write_object ...passed 00:25:39.679 Test: test_write_nesting ...passed 00:25:39.679 Test: test_write_val ...passed 00:25:39.679 00:25:39.679 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.679 suites 1 1 n/a 0 0 00:25:39.679 tests 16 16 16 0 0 00:25:39.679 asserts 918 918 918 0 n/a 00:25:39.679 00:25:39.679 Elapsed time = 0.005 seconds 00:25:39.679 16:02:43 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:25:39.679 00:25:39.679 00:25:39.679 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.679 http://cunit.sourceforge.net/ 00:25:39.679 00:25:39.679 00:25:39.679 Suite: jsonrpc 00:25:39.679 Test: test_parse_request ...passed 00:25:39.679 Test: test_parse_request_streaming ...passed 00:25:39.679 00:25:39.679 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.679 suites 1 1 n/a 0 0 00:25:39.679 tests 2 2 2 0 0 00:25:39.679 asserts 289 289 289 0 n/a 00:25:39.679 00:25:39.679 Elapsed time = 0.004 seconds 00:25:39.679 00:25:39.679 real 0m0.124s 00:25:39.679 user 0m0.064s 00:25:39.679 sys 0m0.062s 00:25:39.679 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.679 ************************************ 00:25:39.679 END TEST unittest_json 00:25:39.679 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.679 ************************************ 00:25:39.679 16:02:43 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:25:39.679 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.679 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.679 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.679 ************************************ 00:25:39.679 START TEST unittest_rpc 00:25:39.679 ************************************ 00:25:39.679 16:02:43 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:25:39.679 16:02:43 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:25:39.679 00:25:39.679 00:25:39.679 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.679 http://cunit.sourceforge.net/ 00:25:39.679 00:25:39.679 00:25:39.679 Suite: rpc 00:25:39.679 Test: test_jsonrpc_handler ...passed 00:25:39.679 Test: test_spdk_rpc_is_method_allowed ...passed 00:25:39.679 Test: test_rpc_get_methods ...[2024-07-22 16:02:43.830946] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:25:39.679 passed 00:25:39.679 Test: test_rpc_spdk_get_version 
...passed 00:25:39.679 Test: test_spdk_rpc_listen_close ...passed 00:25:39.679 00:25:39.679 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.679 suites 1 1 n/a 0 0 00:25:39.679 tests 5 5 5 0 0 00:25:39.679 asserts 20 20 20 0 n/a 00:25:39.679 00:25:39.679 Elapsed time = 0.001 seconds 00:25:39.679 00:25:39.679 real 0m0.034s 00:25:39.679 user 0m0.020s 00:25:39.679 sys 0m0.014s 00:25:39.679 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.679 ************************************ 00:25:39.679 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.679 END TEST unittest_rpc 00:25:39.679 ************************************ 00:25:39.679 16:02:43 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:25:39.679 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.679 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.679 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.679 ************************************ 00:25:39.679 START TEST unittest_notify 00:25:39.679 ************************************ 00:25:39.679 16:02:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:25:39.679 00:25:39.679 00:25:39.679 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.679 http://cunit.sourceforge.net/ 00:25:39.679 00:25:39.679 00:25:39.679 Suite: app_suite 00:25:39.679 Test: notify ...passed 00:25:39.679 00:25:39.679 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.679 suites 1 1 n/a 0 0 00:25:39.679 tests 1 1 1 0 0 00:25:39.679 asserts 13 13 13 0 n/a 00:25:39.679 00:25:39.679 Elapsed time = 0.000 seconds 00:25:39.679 00:25:39.679 real 0m0.028s 00:25:39.679 user 0m0.016s 00:25:39.679 sys 0m0.013s 00:25:39.679 16:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.679 ************************************ 00:25:39.679 END TEST unittest_notify 00:25:39.679 ************************************ 00:25:39.679 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.939 16:02:43 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:25:39.939 16:02:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:39.939 16:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.939 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:25:39.939 ************************************ 00:25:39.939 START TEST unittest_nvme 00:25:39.939 ************************************ 00:25:39.939 16:02:43 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:25:39.939 16:02:43 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:25:39.939 00:25:39.939 00:25:39.939 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.939 http://cunit.sourceforge.net/ 00:25:39.939 00:25:39.939 00:25:39.939 Suite: nvme 00:25:39.939 Test: test_opc_data_transfer ...passed 00:25:39.939 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:25:39.939 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:25:39.939 Test: test_trid_parse_and_compare ...[2024-07-22 16:02:43.987973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:25:39.939 [2024-07-22 16:02:43.988231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:25:39.939 [2024-07-22 16:02:43.988286] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:25:39.939 [2024-07-22 16:02:43.988321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:25:39.939 [2024-07-22 16:02:43.988359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:25:39.939 [2024-07-22 16:02:43.988400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:25:39.939 passed 00:25:39.939 Test: test_trid_trtype_str ...passed 00:25:39.939 Test: test_trid_adrfam_str ...passed 00:25:39.939 Test: test_nvme_ctrlr_probe ...[2024-07-22 16:02:43.988679] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:25:39.939 passed 00:25:39.939 Test: test_spdk_nvme_probe ...[2024-07-22 16:02:43.988775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:25:39.939 [2024-07-22 16:02:43.988811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:39.939 [2024-07-22 16:02:43.988955] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:25:39.939 passed 00:25:39.939 Test: test_spdk_nvme_connect ...[2024-07-22 16:02:43.989031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:39.939 [2024-07-22 16:02:43.989122] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:25:39.939 passed 00:25:39.939 Test: test_nvme_ctrlr_probe_internal ...[2024-07-22 16:02:43.989525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:25:39.939 [2024-07-22 16:02:43.989573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:25:39.939 [2024-07-22 16:02:43.989745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:25:39.939 [2024-07-22 16:02:43.989787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:39.939 passed 00:25:39.939 Test: test_nvme_init_controllers ...[2024-07-22 16:02:43.989912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:25:39.939 passed 00:25:39.939 Test: test_nvme_driver_init ...[2024-07-22 16:02:43.990082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:25:39.939 [2024-07-22 16:02:43.990119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:25:39.939 [2024-07-22 16:02:44.103686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:25:39.939 passed 00:25:39.939 Test: test_spdk_nvme_detach ...passed 00:25:39.939 Test: test_nvme_completion_poll_cb ...passed 00:25:39.939 Test: test_nvme_user_copy_cmd_complete ...[2024-07-22 16:02:44.103902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:25:39.939 passed 00:25:39.939 Test: test_nvme_allocate_request_null ...passed 
00:25:39.939 Test: test_nvme_allocate_request ...passed 00:25:39.939 Test: test_nvme_free_request ...passed 00:25:39.939 Test: test_nvme_allocate_request_user_copy ...passed 00:25:39.939 Test: test_nvme_robust_mutex_init_shared ...passed 00:25:39.939 Test: test_nvme_request_check_timeout ...passed 00:25:39.939 Test: test_nvme_wait_for_completion ...passed 00:25:39.939 Test: test_spdk_nvme_parse_func ...passed 00:25:39.939 Test: test_spdk_nvme_detach_async ...passed 00:25:39.939 Test: test_nvme_parse_addr ...[2024-07-22 16:02:44.105098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:25:39.939 passed 00:25:39.939 00:25:39.939 Run Summary: Type Total Ran Passed Failed Inactive 00:25:39.939 suites 1 1 n/a 0 0 00:25:39.939 tests 25 25 25 0 0 00:25:39.939 asserts 326 326 326 0 n/a 00:25:39.939 00:25:39.939 Elapsed time = 0.007 seconds 00:25:39.939 16:02:44 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:25:39.939 00:25:39.939 00:25:39.939 CUnit - A unit testing framework for C - Version 2.1-3 00:25:39.939 http://cunit.sourceforge.net/ 00:25:39.939 00:25:39.939 00:25:39.939 Suite: nvme_ctrlr 00:25:39.939 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-22 16:02:44.141367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.939 passed 00:25:39.939 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-22 16:02:44.143735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.939 passed 00:25:39.939 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-22 16:02:44.145124] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.939 passed 00:25:39.939 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-22 16:02:44.146535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.939 passed 00:25:39.939 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-22 16:02:44.147941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.939 [2024-07-22 16:02:44.149194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-22 16:02:44.150447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-22 16:02:44.151653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:25:39.939 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-22 16:02:44.154227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.939 [2024-07-22 16:02:44.156626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-22 16:02:44.157864] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:25:39.939 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-22 16:02:44.160454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.939 [2024-07-22 16:02:44.161704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-22 16:02:44.164053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:25:39.940 Test: test_nvme_ctrlr_init_delay ...[2024-07-22 16:02:44.166634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.940 passed 00:25:39.940 Test: test_alloc_io_qpair_rr_1 ...[2024-07-22 16:02:44.168045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.940 [2024-07-22 16:02:44.168283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:25:39.940 [2024-07-22 16:02:44.168383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:25:39.940 passed 00:25:39.940 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:25:39.940 Test: test_ctrlr_get_default_io_qpair_opts ...[2024-07-22 16:02:44.168467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:25:39.940 [2024-07-22 16:02:44.168541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:25:39.940 passed 00:25:39.940 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-22 16:02:44.168710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.940 passed 00:25:39.940 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-22 16:02:44.168930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:39.940 [2024-07-22 16:02:44.169145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:25:39.940 passed 00:25:39.940 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-22 16:02:44.169435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:25:39.940 [2024-07-22 16:02:44.169527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:25:39.940 [2024-07-22 16:02:44.169621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:25:39.940 [2024-07-22 16:02:44.169709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:25:39.940 passed 00:25:39.940 Test: test_nvme_ctrlr_fail ...passed 00:25:39.940 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:25:39.940 Test: test_nvme_ctrlr_set_supported_features ...passed 00:25:39.940 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...[2024-07-22 16:02:44.169780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:25:39.940 passed 00:25:39.940 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-22 16:02:44.170097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.198 passed 00:25:40.198 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:25:40.198 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:25:40.198 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:25:40.198 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-22 16:02:44.465465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-22 16:02:44.472797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-22 16:02:44.474076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 [2024-07-22 16:02:44.474127] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:25:40.458 passed 00:25:40.458 Test: test_alloc_io_qpair_fail ...[2024-07-22 16:02:44.475276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_add_remove_process ...passed 00:25:40.458 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:25:40.458 Test: test_nvme_ctrlr_set_state ...passed 00:25:40.458 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-22 16:02:44.475354] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:25:40.458 [2024-07-22 16:02:44.475492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:25:40.458 [2024-07-22 16:02:44.475544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-22 16:02:44.497981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-22 16:02:44.539469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_reset ...[2024-07-22 16:02:44.541052] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_aer_callback ...[2024-07-22 16:02:44.541410] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-22 16:02:44.542836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:25:40.458 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:25:40.458 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-22 16:02:44.544663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:25:40.458 Test: test_nvme_ctrlr_ana_resize ...[2024-07-22 16:02:44.546145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:25:40.458 Test: test_nvme_transport_ctrlr_ready ...[2024-07-22 16:02:44.547717] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:25:40.458 [2024-07-22 16:02:44.547765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:25:40.458 passed 00:25:40.458 Test: test_nvme_ctrlr_disable ...[2024-07-22 16:02:44.547801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:25:40.458 passed 00:25:40.458 00:25:40.458 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.458 suites 1 1 n/a 0 0 00:25:40.458 tests 43 43 43 0 0 00:25:40.458 asserts 10418 10418 10418 0 n/a 00:25:40.458 00:25:40.458 Elapsed time = 0.367 seconds 00:25:40.458 16:02:44 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:25:40.458 00:25:40.458 00:25:40.458 CUnit - A unit testing framework for C - Version 2.1-3 
00:25:40.458 http://cunit.sourceforge.net/ 00:25:40.458 00:25:40.458 00:25:40.458 Suite: nvme_ctrlr_cmd 00:25:40.458 Test: test_get_log_pages ...passed 00:25:40.458 Test: test_set_feature_cmd ...passed 00:25:40.458 Test: test_set_feature_ns_cmd ...passed 00:25:40.458 Test: test_get_feature_cmd ...passed 00:25:40.458 Test: test_get_feature_ns_cmd ...passed 00:25:40.458 Test: test_abort_cmd ...passed 00:25:40.458 Test: test_set_host_id_cmds ...[2024-07-22 16:02:44.591359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:25:40.458 passed 00:25:40.458 Test: test_io_cmd_raw_no_payload_build ...passed 00:25:40.458 Test: test_io_raw_cmd ...passed 00:25:40.458 Test: test_io_raw_cmd_with_md ...passed 00:25:40.458 Test: test_namespace_attach ...passed 00:25:40.458 Test: test_namespace_detach ...passed 00:25:40.458 Test: test_namespace_create ...passed 00:25:40.458 Test: test_namespace_delete ...passed 00:25:40.458 Test: test_doorbell_buffer_config ...passed 00:25:40.458 Test: test_format_nvme ...passed 00:25:40.458 Test: test_fw_commit ...passed 00:25:40.458 Test: test_fw_image_download ...passed 00:25:40.458 Test: test_sanitize ...passed 00:25:40.458 Test: test_directive ...passed 00:25:40.458 Test: test_nvme_request_add_abort ...passed 00:25:40.458 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:25:40.458 Test: test_nvme_ctrlr_cmd_identify ...passed 00:25:40.459 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:25:40.459 00:25:40.459 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.459 suites 1 1 n/a 0 0 00:25:40.459 tests 24 24 24 0 0 00:25:40.459 asserts 198 198 198 0 n/a 00:25:40.459 00:25:40.459 Elapsed time = 0.001 seconds 00:25:40.459 16:02:44 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:25:40.459 00:25:40.459 00:25:40.459 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.459 http://cunit.sourceforge.net/ 00:25:40.459 00:25:40.459 00:25:40.459 Suite: nvme_ctrlr_cmd 00:25:40.459 Test: test_geometry_cmd ...passed 00:25:40.459 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:25:40.459 00:25:40.459 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.459 suites 1 1 n/a 0 0 00:25:40.459 tests 2 2 2 0 0 00:25:40.459 asserts 7 7 7 0 n/a 00:25:40.459 00:25:40.459 Elapsed time = 0.000 seconds 00:25:40.459 16:02:44 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:25:40.459 00:25:40.459 00:25:40.459 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.459 http://cunit.sourceforge.net/ 00:25:40.459 00:25:40.459 00:25:40.459 Suite: nvme 00:25:40.459 Test: test_nvme_ns_construct ...passed 00:25:40.459 Test: test_nvme_ns_uuid ...passed 00:25:40.459 Test: test_nvme_ns_csi ...passed 00:25:40.459 Test: test_nvme_ns_data ...passed 00:25:40.459 Test: test_nvme_ns_set_identify_data ...passed 00:25:40.459 Test: test_spdk_nvme_ns_get_values ...passed 00:25:40.459 Test: test_spdk_nvme_ns_is_active ...passed 00:25:40.459 Test: spdk_nvme_ns_supports ...passed 00:25:40.459 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:25:40.459 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:25:40.459 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:25:40.459 Test: test_nvme_ns_find_id_desc ...passed 00:25:40.459 00:25:40.459 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.459 suites 1 1 n/a 0 0 00:25:40.459 tests 
12 12 12 0 0 00:25:40.459 asserts 83 83 83 0 n/a 00:25:40.459 00:25:40.459 Elapsed time = 0.001 seconds 00:25:40.459 16:02:44 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:25:40.459 00:25:40.459 00:25:40.459 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.459 http://cunit.sourceforge.net/ 00:25:40.459 00:25:40.459 00:25:40.459 Suite: nvme_ns_cmd 00:25:40.459 Test: split_test ...passed 00:25:40.459 Test: split_test2 ...passed 00:25:40.459 Test: split_test3 ...passed 00:25:40.459 Test: split_test4 ...passed 00:25:40.459 Test: test_nvme_ns_cmd_flush ...passed 00:25:40.459 Test: test_nvme_ns_cmd_dataset_management ...passed 00:25:40.459 Test: test_nvme_ns_cmd_copy ...passed 00:25:40.459 Test: test_io_flags ...[2024-07-22 16:02:44.684781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:25:40.459 passed 00:25:40.459 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:25:40.459 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:25:40.459 Test: test_nvme_ns_cmd_reservation_register ...passed 00:25:40.459 Test: test_nvme_ns_cmd_reservation_release ...passed 00:25:40.459 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:25:40.459 Test: test_nvme_ns_cmd_reservation_report ...passed 00:25:40.459 Test: test_cmd_child_request ...passed 00:25:40.459 Test: test_nvme_ns_cmd_readv ...passed 00:25:40.459 Test: test_nvme_ns_cmd_read_with_md ...passed 00:25:40.459 Test: test_nvme_ns_cmd_writev ...[2024-07-22 16:02:44.686450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:25:40.459 passed 00:25:40.459 Test: test_nvme_ns_cmd_write_with_md ...passed 00:25:40.459 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:25:40.459 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:25:40.459 Test: test_nvme_ns_cmd_comparev ...passed 00:25:40.459 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:25:40.459 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:25:40.459 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:25:40.459 Test: test_nvme_ns_cmd_setup_request ...passed 00:25:40.459 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:25:40.459 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-22 16:02:44.688758] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:25:40.459 passed 00:25:40.459 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-22 16:02:44.688938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:25:40.459 passed 00:25:40.459 Test: test_nvme_ns_cmd_verify ...passed 00:25:40.459 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:25:40.459 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:25:40.459 00:25:40.459 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.459 suites 1 1 n/a 0 0 00:25:40.459 tests 32 32 32 0 0 00:25:40.459 asserts 550 550 550 0 n/a 00:25:40.459 00:25:40.459 Elapsed time = 0.006 seconds 00:25:40.459 16:02:44 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:25:40.459 00:25:40.459 00:25:40.459 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.459 http://cunit.sourceforge.net/ 00:25:40.459 00:25:40.459 00:25:40.459 Suite: nvme_ns_cmd 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:25:40.459 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:25:40.459 00:25:40.459 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.459 suites 1 1 n/a 0 0 00:25:40.459 tests 12 12 12 0 0 00:25:40.459 asserts 123 123 123 0 n/a 00:25:40.459 00:25:40.459 Elapsed time = 0.001 seconds 00:25:40.719 16:02:44 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:25:40.719 00:25:40.719 00:25:40.719 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.719 http://cunit.sourceforge.net/ 00:25:40.719 00:25:40.719 00:25:40.719 Suite: nvme_qpair 00:25:40.719 Test: test3 ...passed 00:25:40.719 Test: test_ctrlr_failed ...passed 00:25:40.719 Test: struct_packing ...passed 00:25:40.719 Test: test_nvme_qpair_process_completions ...[2024-07-22 16:02:44.759073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:40.719 [2024-07-22 16:02:44.759348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:40.719 [2024-07-22 16:02:44.759427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:25:40.719 [2024-07-22 16:02:44.759466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.719 passed 00:25:40.719 Test: test_nvme_completion_is_retry ...passed 00:25:40.719 Test: test_get_status_string ...passed 00:25:40.719 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:25:40.719 Test: test_nvme_qpair_submit_request ...passed 00:25:40.719 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:25:40.719 Test: test_nvme_qpair_manual_complete_request ...passed 00:25:40.719 Test: test_nvme_qpair_init_deinit ...[2024-07-22 16:02:44.760022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:40.719 passed 00:25:40.719 Test: test_nvme_get_sgl_print_info ...passed 00:25:40.719 00:25:40.719 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.719 suites 1 1 n/a 0 0 00:25:40.719 tests 12 12 12 0 0 00:25:40.719 asserts 154 154 154 0 n/a 00:25:40.719 00:25:40.719 Elapsed time = 0.002 seconds 00:25:40.719 16:02:44 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:25:40.719 00:25:40.719 00:25:40.719 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.719 http://cunit.sourceforge.net/ 00:25:40.719 00:25:40.719 00:25:40.719 Suite: nvme_pcie 00:25:40.719 Test: test_prp_list_append 
...[2024-07-22 16:02:44.788057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:25:40.719 [2024-07-22 16:02:44.788353] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:25:40.719 [2024-07-22 16:02:44.788416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:25:40.719 [2024-07-22 16:02:44.788652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:25:40.719 passed 00:25:40.719 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-22 16:02:44.788779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:25:40.719 passed 00:25:40.719 Test: test_shadow_doorbell_update ...passed 00:25:40.719 Test: test_build_contig_hw_sgl_request ...passed 00:25:40.719 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:25:40.719 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:25:40.719 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:25:40.719 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:25:40.719 Test: test_nvme_pcie_ctrlr_regs_get_set ...[2024-07-22 16:02:44.789184] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:25:40.719 passed 00:25:40.719 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:25:40.719 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:25:40.719 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-22 16:02:44.789420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:25:40.719 [2024-07-22 16:02:44.789534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:25:40.719 passed 00:25:40.719 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:25:40.719 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-22 16:02:44.789634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:25:40.719 passed 00:25:40.719 00:25:40.719 [2024-07-22 16:02:44.789723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:25:40.719 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.719 suites 1 1 n/a 0 0 00:25:40.720 tests 14 14 14 0 0 00:25:40.720 asserts 235 235 235 0 n/a 00:25:40.720 00:25:40.720 Elapsed time = 0.002 seconds 00:25:40.720 16:02:44 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:25:40.720 00:25:40.720 00:25:40.720 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.720 http://cunit.sourceforge.net/ 00:25:40.720 00:25:40.720 00:25:40.720 Suite: nvme_ns_cmd 00:25:40.720 Test: nvme_poll_group_create_test ...passed 00:25:40.720 Test: nvme_poll_group_add_remove_test ...passed 00:25:40.720 Test: nvme_poll_group_process_completions ...passed 00:25:40.720 Test: nvme_poll_group_destroy_test ...passed 00:25:40.720 Test: nvme_poll_group_get_free_stats ...passed 00:25:40.720 00:25:40.720 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.720 suites 1 1 n/a 0 0 00:25:40.720 tests 5 5 5 0 0 00:25:40.720 asserts 75 75 75 0 n/a 00:25:40.720 00:25:40.720 Elapsed time = 0.001 seconds 00:25:40.720 16:02:44 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:25:40.720 00:25:40.720 00:25:40.720 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.720 http://cunit.sourceforge.net/ 00:25:40.720 00:25:40.720 00:25:40.720 Suite: nvme_quirks 00:25:40.720 Test: test_nvme_quirks_striping ...passed 00:25:40.720 00:25:40.720 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.720 suites 1 1 n/a 0 0 00:25:40.720 tests 1 1 1 0 0 00:25:40.720 asserts 5 5 5 0 n/a 00:25:40.720 00:25:40.720 Elapsed time = 0.000 seconds 00:25:40.720 16:02:44 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:25:40.720 00:25:40.720 00:25:40.720 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.720 http://cunit.sourceforge.net/ 00:25:40.720 00:25:40.720 00:25:40.720 Suite: nvme_tcp 00:25:40.720 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:25:40.720 Test: test_nvme_tcp_build_iovs ...passed 00:25:40.720 Test: test_nvme_tcp_build_sgl_request ...[2024-07-22 16:02:44.880330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x71e9cd80d2e0, and the iovcnt=16, remaining_size=28672 00:25:40.720 passed 00:25:40.720 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:25:40.720 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:25:40.720 Test: test_nvme_tcp_req_complete_safe ...passed 00:25:40.720 Test: test_nvme_tcp_req_get ...passed 00:25:40.720 Test: test_nvme_tcp_req_init ...passed 00:25:40.720 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:25:40.720 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:25:40.720 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:25:40.720 Test: 
test_nvme_tcp_alloc_reqs ...[2024-07-22 16:02:44.881091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd309030 is same with the state(6) to be set 00:25:40.720 passed 00:25:40.720 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-22 16:02:44.881523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd709070 is same with the state(5) to be set 00:25:40.720 passed 00:25:40.720 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-22 16:02:44.881617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x71e9cd60a6e0 00:25:40.720 [2024-07-22 16:02:44.881673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:25:40.720 [2024-07-22 16:02:44.881696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.881736] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:25:40.720 [2024-07-22 16:02:44.881801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.881850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:40.720 [2024-07-22 16:02:44.881907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.881946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.882005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.882040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.882083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 passed 00:25:40.720 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-22 16:02:44.882138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd60a070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.882384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:25:40.720 [2024-07-22 16:02:44.882449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:25:40.720 [2024-07-22 16:02:44.882858] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:25:40.720 passed 00:25:40.720 Test: 
test_nvme_tcp_qpair_icreq_send ...passed 00:25:40.720 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:25:40.720 Test: test_nvme_tcp_icresp_handle ...[2024-07-22 16:02:44.882964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x71e9cd60b540): PDU Sequence Error 00:25:40.720 [2024-07-22 16:02:44.883051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:25:40.720 [2024-07-22 16:02:44.883109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:25:40.720 [2024-07-22 16:02:44.883163] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd70d070 is same with the state(5) to be set 00:25:40.720 passed 00:25:40.720 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:25:40.720 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-22 16:02:44.883211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:25:40.720 [2024-07-22 16:02:44.883250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd70d070 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.883292] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd70d070 is same with the state(0) to be set 00:25:40.720 [2024-07-22 16:02:44.883358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x71e9cd60c540): PDU Sequence Error 00:25:40.720 passed 00:25:40.720 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:25:40.720 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-22 16:02:44.883478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x71e9cd70f200 00:25:40.720 [2024-07-22 16:02:44.883672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x71e9cd825480, errno=0, rc=0 00:25:40.720 [2024-07-22 16:02:44.883720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd825480 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.883773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71e9cd825480 is same with the state(5) to be set 00:25:40.720 [2024-07-22 16:02:44.883824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71e9cd825480 (0): Success 00:25:40.720 passed 00:25:40.720 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-22 16:02:44.883848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71e9cd825480 (0): Success 00:25:40.980 passed 00:25:40.980 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-07-22 16:02:45.007779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:25:40.980 [2024-07-22 16:02:45.007918] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:25:40.980 passed 00:25:40.980 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:25:40.980 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-22 16:02:45.008194] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:25:40.980 [2024-07-22 16:02:45.008227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:25:40.980 [2024-07-22 16:02:45.008452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:25:40.980 [2024-07-22 16:02:45.008510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:40.980 [2024-07-22 16:02:45.008608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:25:40.980 [2024-07-22 16:02:45.008686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:40.980 [2024-07-22 16:02:45.008816] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513000001540 with addr=192.168.1.78, port=23 00:25:40.980 passed 00:25:40.980 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-22 16:02:45.008891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:40.980 [2024-07-22 16:02:45.009084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x513000001a80, and the iovcnt=1, remaining_size=1024 00:25:40.980 [2024-07-22 16:02:45.009147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:25:40.980 passed 00:25:40.980 00:25:40.980 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.980 suites 1 1 n/a 0 0 00:25:40.980 tests 27 27 27 0 0 00:25:40.980 asserts 624 624 624 0 n/a 00:25:40.980 00:25:40.980 Elapsed time = 0.129 seconds 00:25:40.980 16:02:45 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:25:40.980 00:25:40.980 00:25:40.980 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.980 http://cunit.sourceforge.net/ 00:25:40.980 00:25:40.980 00:25:40.980 Suite: nvme_transport 00:25:40.980 Test: test_nvme_get_transport ...passed 00:25:40.980 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:25:40.980 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:25:40.980 Test: test_nvme_transport_poll_group_add_remove ...passed 00:25:40.980 Test: test_ctrlr_get_memory_domains ...passed 00:25:40.980 00:25:40.980 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.980 suites 1 1 n/a 0 0 00:25:40.980 tests 5 5 5 0 0 00:25:40.980 asserts 28 28 28 0 n/a 00:25:40.980 00:25:40.980 Elapsed time = 0.000 seconds 00:25:40.980 16:02:45 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:25:40.980 00:25:40.980 00:25:40.980 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.980 http://cunit.sourceforge.net/ 
00:25:40.980 00:25:40.980 00:25:40.980 Suite: nvme_io_msg 00:25:40.980 Test: test_nvme_io_msg_send ...passed 00:25:40.980 Test: test_nvme_io_msg_process ...passed 00:25:40.980 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:25:40.980 00:25:40.980 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.980 suites 1 1 n/a 0 0 00:25:40.980 tests 3 3 3 0 0 00:25:40.980 asserts 56 56 56 0 n/a 00:25:40.980 00:25:40.980 Elapsed time = 0.000 seconds 00:25:40.980 16:02:45 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:25:40.980 00:25:40.980 00:25:40.980 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.980 http://cunit.sourceforge.net/ 00:25:40.980 00:25:40.980 00:25:40.980 Suite: nvme_pcie_common 00:25:40.980 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-22 16:02:45.121885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:25:40.980 passed 00:25:40.980 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:25:40.980 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:25:40.980 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-22 16:02:45.124205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:25:40.980 [2024-07-22 16:02:45.124319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:25:40.980 passed 00:25:40.980 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-22 16:02:45.124937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:25:40.980 passed 00:25:40.980 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-22 16:02:45.126145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:25:40.980 [2024-07-22 16:02:45.126561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:25:40.981 passed 00:25:40.981 00:25:40.981 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.981 suites 1 1 n/a 0 0 00:25:40.981 tests 6 6 6 0 0 00:25:40.981 asserts 148 148 148 0 n/a 00:25:40.981 00:25:40.981 Elapsed time = 0.005 seconds 00:25:40.981 16:02:45 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:25:40.981 00:25:40.981 00:25:40.981 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.981 http://cunit.sourceforge.net/ 00:25:40.981 00:25:40.981 00:25:40.981 Suite: nvme_fabric 00:25:40.981 Test: test_nvme_fabric_prop_set_cmd ...passed 00:25:40.981 Test: test_nvme_fabric_prop_get_cmd ...passed 00:25:40.981 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:25:40.981 Test: test_nvme_fabric_discover_probe ...passed 00:25:40.981 Test: test_nvme_fabric_qpair_connect ...passed 00:25:40.981 00:25:40.981 [2024-07-22 16:02:45.161203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:25:40.981 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.981 suites 1 1 n/a 0 0 00:25:40.981 tests 5 5 5 0 0 00:25:40.981 asserts 
60 60 60 0 n/a 00:25:40.981 00:25:40.981 Elapsed time = 0.001 seconds 00:25:40.981 16:02:45 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:25:40.981 00:25:40.981 00:25:40.981 CUnit - A unit testing framework for C - Version 2.1-3 00:25:40.981 http://cunit.sourceforge.net/ 00:25:40.981 00:25:40.981 00:25:40.981 Suite: nvme_opal 00:25:40.981 Test: test_opal_nvme_security_recv_send_done ...passed 00:25:40.981 Test: test_opal_add_short_atom_header ...[2024-07-22 16:02:45.191228] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:25:40.981 passed 00:25:40.981 00:25:40.981 Run Summary: Type Total Ran Passed Failed Inactive 00:25:40.981 suites 1 1 n/a 0 0 00:25:40.981 tests 2 2 2 0 0 00:25:40.981 asserts 22 22 22 0 n/a 00:25:40.981 00:25:40.981 Elapsed time = 0.000 seconds 00:25:40.981 00:25:40.981 real 0m1.234s 00:25:40.981 user 0m0.593s 00:25:40.981 sys 0m0.498s 00:25:40.981 16:02:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.981 16:02:45 -- common/autotest_common.sh@10 -- # set +x 00:25:40.981 ************************************ 00:25:40.981 END TEST unittest_nvme 00:25:40.981 ************************************ 00:25:40.981 16:02:45 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:25:40.981 16:02:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.981 16:02:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.981 16:02:45 -- common/autotest_common.sh@10 -- # set +x 00:25:41.240 ************************************ 00:25:41.240 START TEST unittest_log 00:25:41.240 ************************************ 00:25:41.240 16:02:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:25:41.240 00:25:41.240 00:25:41.240 CUnit - A unit testing framework for C - Version 2.1-3 00:25:41.240 http://cunit.sourceforge.net/ 00:25:41.240 00:25:41.240 00:25:41.240 Suite: log 00:25:41.240 Test: log_test ...[2024-07-22 16:02:45.276098] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:25:41.240 [2024-07-22 16:02:45.276374] log_ut.c: 55:log_test: *DEBUG*: log test 00:25:41.240 log dump test: 00:25:41.240 passed 00:25:41.240 Test: deprecation ...00000000 6c 6f 67 20 64 75 6d 70 log dump 00:25:41.240 spdk dump test: 00:25:41.240 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:25:41.240 spdk dump test: 00:25:41.240 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:25:41.240 00000010 65 20 63 68 61 72 73 e chars 00:25:42.177 passed 00:25:42.177 00:25:42.177 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.177 suites 1 1 n/a 0 0 00:25:42.177 tests 2 2 2 0 0 00:25:42.177 asserts 73 73 73 0 n/a 00:25:42.177 00:25:42.177 Elapsed time = 0.001 seconds 00:25:42.177 00:25:42.177 real 0m1.034s 00:25:42.177 user 0m0.014s 00:25:42.177 sys 0m0.020s 00:25:42.177 16:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.177 ************************************ 00:25:42.177 END TEST unittest_log 00:25:42.177 ************************************ 00:25:42.177 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.177 16:02:46 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:25:42.177 16:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.177 16:02:46 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:25:42.177 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.177 ************************************ 00:25:42.177 START TEST unittest_lvol 00:25:42.177 ************************************ 00:25:42.177 16:02:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:25:42.177 00:25:42.177 00:25:42.177 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.177 http://cunit.sourceforge.net/ 00:25:42.177 00:25:42.177 00:25:42.177 Suite: lvol 00:25:42.177 Test: lvs_init_unload_success ...[2024-07-22 16:02:46.370574] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:25:42.177 passed 00:25:42.177 Test: lvs_init_destroy_success ...[2024-07-22 16:02:46.371132] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:25:42.177 passed 00:25:42.177 Test: lvs_init_opts_success ...passed 00:25:42.177 Test: lvs_unload_lvs_is_null_fail ...passed 00:25:42.177 Test: lvs_names ...[2024-07-22 16:02:46.371380] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:25:42.177 [2024-07-22 16:02:46.371455] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:25:42.177 [2024-07-22 16:02:46.371497] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:25:42.177 [2024-07-22 16:02:46.371680] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:25:42.177 passed 00:25:42.177 Test: lvol_create_destroy_success ...passed 00:25:42.177 Test: lvol_create_fail ...[2024-07-22 16:02:46.372285] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:25:42.177 passed 00:25:42.177 Test: lvol_destroy_fail ...[2024-07-22 16:02:46.372387] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:25:42.177 [2024-07-22 16:02:46.372801] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:25:42.177 passed 00:25:42.177 Test: lvol_close ...[2024-07-22 16:02:46.373026] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:25:42.177 passed 00:25:42.177 Test: lvol_resize ...[2024-07-22 16:02:46.373097] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:25:42.177 passed 00:25:42.177 Test: lvol_set_read_only ...passed 00:25:42.177 Test: test_lvs_load ...[2024-07-22 16:02:46.373794] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:25:42.177 [2024-07-22 16:02:46.373845] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:25:42.177 passed 00:25:42.177 Test: lvols_load ...[2024-07-22 16:02:46.374073] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:25:42.177 [2024-07-22 16:02:46.374194] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:25:42.177 passed 00:25:42.177 Test: lvol_open ...passed 00:25:42.177 Test: lvol_snapshot ...passed 00:25:42.177 Test: lvol_snapshot_fail ...[2024-07-22 16:02:46.374910] 
/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:25:42.177 passed 00:25:42.177 Test: lvol_clone ...passed 00:25:42.177 Test: lvol_clone_fail ...[2024-07-22 16:02:46.375415] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:25:42.177 passed 00:25:42.177 Test: lvol_iter_clones ...passed 00:25:42.177 Test: lvol_refcnt ...[2024-07-22 16:02:46.375894] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 0c988dea-2712-48a3-a7d1-76f0780e4624 because it is still open 00:25:42.177 passed 00:25:42.177 Test: lvol_names ...[2024-07-22 16:02:46.376044] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:25:42.177 [2024-07-22 16:02:46.376107] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:25:42.177 [2024-07-22 16:02:46.376296] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:25:42.177 passed 00:25:42.177 Test: lvol_create_thin_provisioned ...passed 00:25:42.177 Test: lvol_rename ...[2024-07-22 16:02:46.376689] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:25:42.177 passed 00:25:42.177 Test: lvs_rename ...[2024-07-22 16:02:46.376789] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:25:42.177 passed 00:25:42.177 Test: lvol_inflate ...[2024-07-22 16:02:46.377000] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:25:42.177 [2024-07-22 16:02:46.377190] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:25:42.177 passed 00:25:42.177 Test: lvol_decouple_parent ...passed 00:25:42.177 Test: lvol_get_xattr ...[2024-07-22 16:02:46.377405] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:25:42.177 passed 00:25:42.177 Test: lvol_esnap_reload ...passed 00:25:42.177 Test: lvol_esnap_create_bad_args ...[2024-07-22 16:02:46.377794] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:25:42.177 [2024-07-22 16:02:46.377828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:25:42.177 [2024-07-22 16:02:46.377868] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:25:42.177 [2024-07-22 16:02:46.377940] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:25:42.177 passed 00:25:42.177 Test: lvol_esnap_create_delete ...[2024-07-22 16:02:46.378098] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:25:42.177 passed 00:25:42.177 Test: lvol_esnap_load_esnaps ...passed 00:25:42.177 Test: lvol_esnap_missing ...[2024-07-22 16:02:46.378334] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:25:42.177 [2024-07-22 16:02:46.378543] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:25:42.177 [2024-07-22 16:02:46.378587] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:25:42.177 passed 00:25:42.177 Test: lvol_esnap_hotplug ... 00:25:42.177 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:25:42.177 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:25:42.177 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:25:42.177 [2024-07-22 16:02:46.379220] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 0e75a3e6-d0de-4f82-af8b-910ebfdacc41: failed to create esnap bs_dev: error -12 00:25:42.177 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:25:42.177 [2024-07-22 16:02:46.379431] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 504b7d1e-3a64-471e-bdcb-419cc69ee5cd: failed to create esnap bs_dev: error -12 00:25:42.178 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:25:42.178 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:25:42.178 [2024-07-22 16:02:46.379554] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d2dd7a41-e081-4151-aa41-958e4539e544: failed to create esnap bs_dev: error -12 00:25:42.178 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:25:42.178 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:25:42.178 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:25:42.178 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:25:42.178 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:25:42.178 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:25:42.178 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:25:42.178 passed 00:25:42.178 Test: lvol_get_by ...passed 00:25:42.178 00:25:42.178 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.178 suites 1 1 n/a 0 0 00:25:42.178 tests 34 34 34 0 0 00:25:42.178 asserts 1439 1439 1439 0 n/a 00:25:42.178 00:25:42.178 Elapsed time = 0.010 seconds 00:25:42.178 00:25:42.178 real 0m0.053s 00:25:42.178 user 0m0.032s 00:25:42.178 sys 0m0.022s 00:25:42.178 16:02:46 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.178 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.178 ************************************ 00:25:42.178 END TEST unittest_lvol 00:25:42.178 ************************************ 00:25:42.178 16:02:46 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:42.178 16:02:46 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:25:42.178 16:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.178 16:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.178 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.437 ************************************ 00:25:42.437 START TEST unittest_nvme_rdma 00:25:42.437 ************************************ 00:25:42.437 16:02:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:25:42.437 00:25:42.437 00:25:42.437 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.437 http://cunit.sourceforge.net/ 00:25:42.437 00:25:42.437 00:25:42.437 Suite: nvme_rdma 00:25:42.437 Test: test_nvme_rdma_build_sgl_request ...[2024-07-22 16:02:46.474257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-07-22 16:02:46.474490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:25:42.437 [2024-07-22 16:02:46.474534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_build_contig_request ...passed 00:25:42.437 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:25:42.437 Test: test_nvme_rdma_create_reqs ...[2024-07-22 16:02:46.474643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:25:42.437 [2024-07-22 16:02:46.474759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_create_rsps ...[2024-07-22 16:02:46.475173] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-22 16:02:46.475392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_poller_create ...[2024-07-22 16:02:46.475422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:25:42.437 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-22 16:02:46.475604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_req_put_and_get ...passed 00:25:42.437 Test: test_nvme_rdma_req_init ...passed 00:25:42.437 Test: test_nvme_rdma_validate_cm_event ...[2024-07-22 16:02:46.475940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_qpair_init ...[2024-07-22 16:02:46.475981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_qpair_submit_request ...passed 00:25:42.437 Test: test_nvme_rdma_memory_domain ...passed 00:25:42.437 Test: test_rdma_ctrlr_get_memory_domains ...[2024-07-22 16:02:46.476218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:25:42.437 passed 00:25:42.437 Test: test_rdma_get_memory_translation ...passed 00:25:42.437 Test: test_get_rdma_qpair_from_wc ...passed 00:25:42.437 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:25:42.437 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-22 16:02:46.476307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:25:42.437 [2024-07-22 16:02:46.476333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:25:42.437 passed 00:25:42.437 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-22 16:02:46.476458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:25:42.437 [2024-07-22 16:02:46.476499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:25:42.437 [2024-07-22 16:02:46.476691] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:25:42.437 [2024-07-22 16:02:46.476766] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:25:42.437 [2024-07-22 16:02:46.476793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x74e0de00a030 on poll group 0x50b000000040 00:25:42.437 [2024-07-22 16:02:46.476837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:25:42.437 [2024-07-22 16:02:46.476876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:25:42.437 [2024-07-22 16:02:46.476904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x74e0de00a030 on poll group 0x50b000000040 00:25:42.437 [2024-07-22 16:02:46.476970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:25:42.437 passed 00:25:42.437 00:25:42.437 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.437 suites 1 1 n/a 0 0 00:25:42.437 tests 22 22 22 0 0 00:25:42.437 asserts 412 412 412 0 n/a 00:25:42.437 00:25:42.437 Elapsed time = 0.003 seconds 00:25:42.437 00:25:42.437 real 0m0.033s 00:25:42.437 user 0m0.014s 00:25:42.437 sys 0m0.020s 00:25:42.437 16:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.437 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.437 ************************************ 00:25:42.437 END TEST unittest_nvme_rdma 00:25:42.437 ************************************ 00:25:42.437 16:02:46 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:25:42.437 16:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.437 16:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.437 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.437 ************************************ 00:25:42.437 START TEST unittest_nvmf_transport 00:25:42.437 ************************************ 00:25:42.437 16:02:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:25:42.437 00:25:42.437 00:25:42.437 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.437 http://cunit.sourceforge.net/ 00:25:42.437 00:25:42.437 00:25:42.437 Suite: nvmf 00:25:42.437 Test: test_spdk_nvmf_transport_create ...[2024-07-22 16:02:46.561143] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:25:42.437 [2024-07-22 16:02:46.561394] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:25:42.437 [2024-07-22 16:02:46.561462] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:25:42.437 passed 00:25:42.437 Test: test_nvmf_transport_poll_group_create ...[2024-07-22 16:02:46.561541] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:25:42.437 passed 00:25:42.437 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-22 16:02:46.561843] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:25:42.437 [2024-07-22 16:02:46.561880] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:25:42.437 passed 00:25:42.437 Test: test_spdk_nvmf_transport_listen_ext ...[2024-07-22 16:02:46.561935] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:25:42.437 passed 00:25:42.437 00:25:42.437 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.437 suites 1 1 n/a 0 0 00:25:42.437 tests 4 4 4 0 0 00:25:42.437 asserts 49 49 49 0 n/a 00:25:42.437 00:25:42.437 Elapsed time = 0.001 seconds 00:25:42.437 00:25:42.437 real 0m0.043s 00:25:42.437 user 0m0.019s 00:25:42.437 sys 0m0.025s 00:25:42.437 16:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.437 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.437 ************************************ 00:25:42.437 END TEST unittest_nvmf_transport 00:25:42.437 ************************************ 00:25:42.437 16:02:46 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:25:42.437 16:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.438 16:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.438 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.438 ************************************ 00:25:42.438 START TEST unittest_rdma 00:25:42.438 ************************************ 00:25:42.438 16:02:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:25:42.438 00:25:42.438 00:25:42.438 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.438 http://cunit.sourceforge.net/ 00:25:42.438 00:25:42.438 00:25:42.438 Suite: rdma_common 00:25:42.438 Test: test_spdk_rdma_pd ...passed 00:25:42.438 00:25:42.438 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.438 suites 1 1 n/a 0 0 00:25:42.438 tests 1 1 1 0 0 00:25:42.438 asserts 31 31 31 0 n/a 00:25:42.438 00:25:42.438 Elapsed time = 0.001 seconds 00:25:42.438 [2024-07-22 16:02:46.647333] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:25:42.438 [2024-07-22 16:02:46.647615] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:25:42.438 00:25:42.438 real 0m0.027s 00:25:42.438 user 0m0.016s 00:25:42.438 sys 0m0.012s 00:25:42.438 16:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.438 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.438 ************************************ 00:25:42.438 END TEST unittest_rdma 00:25:42.438 ************************************ 00:25:42.438 16:02:46 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:42.438 16:02:46 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:25:42.438 16:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.438 16:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.698 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 ************************************ 00:25:42.698 START TEST unittest_nvme_cuse 00:25:42.698 ************************************ 00:25:42.698 16:02:46 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:25:42.698 00:25:42.698 00:25:42.698 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.698 http://cunit.sourceforge.net/ 00:25:42.698 00:25:42.698 00:25:42.698 Suite: nvme_cuse 00:25:42.698 Test: test_cuse_nvme_submit_io_read_write ...passed 00:25:42.698 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:25:42.698 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:25:42.698 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:25:42.698 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:25:42.698 Test: test_cuse_nvme_submit_io ...[2024-07-22 16:02:46.736456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:25:42.698 passed 00:25:42.698 Test: test_cuse_nvme_reset ...passed 00:25:42.698 Test: test_nvme_cuse_stop ...passed 00:25:42.698 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:25:42.698 00:25:42.698 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.698 suites 1 1 n/a 0 0 00:25:42.698 tests 9 9 9 0 0 00:25:42.698 asserts 121 121 121 0 n/a 00:25:42.698 00:25:42.698 Elapsed time = 0.002 seconds 00:25:42.698 [2024-07-22 16:02:46.736738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:25:42.698 00:25:42.698 real 0m0.033s 00:25:42.698 user 0m0.015s 00:25:42.698 sys 0m0.018s 00:25:42.698 16:02:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.698 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 ************************************ 00:25:42.698 END TEST unittest_nvme_cuse 00:25:42.698 ************************************ 00:25:42.698 16:02:46 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:25:42.698 16:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:42.698 16:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.698 16:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.698 ************************************ 00:25:42.698 START TEST unittest_nvmf 00:25:42.698 ************************************ 00:25:42.698 16:02:46 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:25:42.698 16:02:46 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:25:42.698 00:25:42.698 00:25:42.698 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.698 http://cunit.sourceforge.net/ 00:25:42.698 00:25:42.698 00:25:42.698 Suite: nvmf 00:25:42.698 Test: test_get_log_page ...[2024-07-22 16:02:46.827208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:25:42.698 passed 00:25:42.698 Test: test_process_fabrics_cmd ...passed 00:25:42.698 Test: test_connect ...[2024-07-22 16:02:46.828663] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:25:42.698 [2024-07-22 16:02:46.828736] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:25:42.698 [2024-07-22 16:02:46.828802] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:25:42.698 [2024-07-22 16:02:46.828841] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:25:42.698 [2024-07-22 16:02:46.828883] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:25:42.698 [2024-07-22 16:02:46.828933] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:25:42.698 [2024-07-22 16:02:46.828970] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:25:42.698 [2024-07-22 16:02:46.829013] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:25:42.698 [2024-07-22 16:02:46.829119] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:25:42.698 [2024-07-22 16:02:46.829210] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:25:42.698 [2024-07-22 16:02:46.829524] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:25:42.698 [2024-07-22 16:02:46.829602] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:25:42.698 [2024-07-22 16:02:46.829687] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:25:42.698 [2024-07-22 16:02:46.829764] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:25:42.698 [2024-07-22 16:02:46.829874] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:25:42.698 passed 00:25:42.698 Test: test_get_ns_id_desc_list ...[2024-07-22 16:02:46.830074] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:25:42.698 passed 00:25:42.698 Test: test_identify_ns ...[2024-07-22 16:02:46.830439] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:42.698 [2024-07-22 16:02:46.830657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:25:42.698 [2024-07-22 16:02:46.830773] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:25:42.698 passed 00:25:42.698 Test: test_identify_ns_iocs_specific ...[2024-07-22 16:02:46.830919] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:42.698 [2024-07-22 16:02:46.831241] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:25:42.698 passed 00:25:42.698 Test: test_reservation_write_exclusive ...passed 00:25:42.698 Test: test_reservation_exclusive_access ...passed 00:25:42.698 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:25:42.698 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:25:42.698 Test: test_reservation_notification_log_page ...passed 00:25:42.698 Test: test_get_dif_ctx ...passed 00:25:42.698 Test: test_set_get_features ...passed[2024-07-22 16:02:46.831786] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:25:42.698 [2024-07-22 16:02:46.831841] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:25:42.698 [2024-07-22 16:02:46.831877] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:25:42.698 [2024-07-22 16:02:46.831912] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:25:42.698 00:25:42.698 Test: test_identify_ctrlr ...passed 00:25:42.698 Test: test_identify_ctrlr_iocs_specific ...passed 00:25:42.698 Test: test_custom_admin_cmd ...passed 00:25:42.698 Test: test_fused_compare_and_write ...[2024-07-22 16:02:46.832495] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:25:42.698 [2024-07-22 16:02:46.832568] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:25:42.698 [2024-07-22 16:02:46.832610] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:25:42.698 passed 00:25:42.698 Test: test_multi_async_event_reqs ...passed 00:25:42.698 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:25:42.698 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:25:42.698 Test: test_multi_async_events ...passed 00:25:42.698 Test: test_rae ...passed 00:25:42.698 Test: test_nvmf_ctrlr_create_destruct ...passed 00:25:42.698 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:25:42.698 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:25:42.698 Test: test_zcopy_read ...passed 00:25:42.698 Test: test_zcopy_write ...passed 00:25:42.698 Test: test_nvmf_property_set ...passed 00:25:42.698 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-22 16:02:46.833287] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:25:42.698 passed 00:25:42.698 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-22 16:02:46.833494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:25:42.698 [2024-07-22 16:02:46.833550] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:25:42.698 [2024-07-22 16:02:46.833599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:25:42.698 [2024-07-22 16:02:46.833631] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:25:42.698 [2024-07-22 16:02:46.833675] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:25:42.698 passed 00:25:42.698 00:25:42.698 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.698 suites 1 1 n/a 0 0 00:25:42.698 tests 30 30 30 0 0 00:25:42.698 asserts 885 885 885 0 n/a 00:25:42.698 00:25:42.698 Elapsed time = 0.006 seconds 00:25:42.698 16:02:46 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:25:42.698 00:25:42.698 00:25:42.698 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.698 http://cunit.sourceforge.net/ 00:25:42.698 00:25:42.698 00:25:42.698 Suite: nvmf 00:25:42.698 Test: test_get_rw_params ...passed 00:25:42.698 Test: test_lba_in_range ...passed 00:25:42.698 Test: test_get_dif_ctx ...passed 00:25:42.698 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:25:42.699 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-22 16:02:46.868168] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:25:42.699 passed 00:25:42.699 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:25:42.699 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-22 16:02:46.868423] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:25:42.699 [2024-07-22 16:02:46.868486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:25:42.699 [2024-07-22 16:02:46.868558] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:25:42.699 [2024-07-22 16:02:46.868599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:25:42.699 [2024-07-22 16:02:46.868653] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:25:42.699 passed 00:25:42.699 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:25:42.699 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...[2024-07-22 16:02:46.868696] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:25:42.699 [2024-07-22 16:02:46.868735] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:25:42.699 [2024-07-22 16:02:46.868779] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:25:42.699 passed 00:25:42.699 00:25:42.699 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.699 suites 1 1 n/a 0 0 00:25:42.699 tests 9 9 9 0 0 00:25:42.699 asserts 157 157 157 0 n/a 00:25:42.699 00:25:42.699 Elapsed time = 0.001 seconds 00:25:42.699 16:02:46 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:25:42.699 00:25:42.699 00:25:42.699 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.699 http://cunit.sourceforge.net/ 00:25:42.699 00:25:42.699 00:25:42.699 Suite: nvmf 00:25:42.699 Test: test_discovery_log ...passed 00:25:42.699 Test: test_discovery_log_with_filters ...passed 00:25:42.699 00:25:42.699 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.699 suites 1 1 n/a 0 0 00:25:42.699 tests 2 2 2 0 0 00:25:42.699 asserts 238 238 238 0 n/a 00:25:42.699 00:25:42.699 Elapsed time = 0.003 seconds 00:25:42.699 16:02:46 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:25:42.699 00:25:42.699 00:25:42.699 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.699 http://cunit.sourceforge.net/ 00:25:42.699 00:25:42.699 00:25:42.699 Suite: nvmf 
00:25:42.699 Test: nvmf_test_create_subsystem ...[2024-07-22 16:02:46.933675] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:25:42.699 [2024-07-22 16:02:46.933974] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:25:42.699 [2024-07-22 16:02:46.934033] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:25:42.699 [2024-07-22 16:02:46.934056] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:25:42.699 [2024-07-22 16:02:46.934087] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:25:42.699 [2024-07-22 16:02:46.934113] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:25:42.699 [2024-07-22 16:02:46.934220] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:25:42.699 [2024-07-22 16:02:46.934319] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:25:42.699 [2024-07-22 16:02:46.934403] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:25:42.699 [2024-07-22 16:02:46.934427] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:25:42.699 [2024-07-22 16:02:46.934466] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:25:42.699 passed 00:25:42.699 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:25:42.699 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:25:42.699 Test: test_reservation_register ...[2024-07-22 16:02:46.934718] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:25:42.699 [2024-07-22 16:02:46.934758] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:25:42.699 [2024-07-22 16:02:46.934952] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 passed 00:25:42.699 Test: test_reservation_register_with_ptpl ...[2024-07-22 16:02:46.935074] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:25:42.699 passed 00:25:42.699 Test: test_reservation_acquire_preempt_1 ...[2024-07-22 16:02:46.936164] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 passed 00:25:42.699 Test: test_reservation_acquire_release_with_ptpl ...passed 00:25:42.699 Test: test_reservation_release ...passed 00:25:42.699 Test: test_reservation_unregister_notification ...passed 00:25:42.699 Test: test_reservation_release_notification ...[2024-07-22 16:02:46.937957] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 [2024-07-22 16:02:46.938134] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 passed 00:25:42.699 Test: test_reservation_release_notification_write_exclusive ...[2024-07-22 16:02:46.938309] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 [2024-07-22 16:02:46.938491] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 passed 00:25:42.699 Test: test_reservation_clear_notification ...[2024-07-22 16:02:46.938662] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 passed 00:25:42.699 Test: test_reservation_preempt_notification ...passed 00:25:42.699 Test: test_spdk_nvmf_ns_event ...[2024-07-22 16:02:46.938830] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:25:42.699 passed 00:25:42.699 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:25:42.699 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:25:42.699 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:25:42.699 Test: test_nvmf_ns_reservation_report ...[2024-07-22 16:02:46.939571] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:25:42.699 [2024-07-22 16:02:46.939633] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:25:42.699 [2024-07-22 16:02:46.939746] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:25:42.699 passed 00:25:42.699 Test: test_nvmf_nqn_is_valid ...passed 00:25:42.699 Test: test_nvmf_ns_reservation_restore ...[2024-07-22 16:02:46.939795] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:25:42.699 [2024-07-22 16:02:46.939822] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:e0ab91f1-f0f2-44dc-b32c-19f14561c20": uuid is not the correct length 00:25:42.699 [2024-07-22 16:02:46.939835] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:25:42.699 [2024-07-22 16:02:46.939937] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:25:42.699 passed 00:25:42.699 Test: test_nvmf_subsystem_state_change ...passed 00:25:42.699 Test: test_nvmf_reservation_custom_ops ...passed 00:25:42.699 00:25:42.699 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.699 suites 1 1 n/a 0 0 00:25:42.699 tests 22 22 22 0 0 00:25:42.699 asserts 407 407 407 0 n/a 00:25:42.699 00:25:42.699 Elapsed time = 0.007 seconds 00:25:42.699 16:02:46 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:25:42.958 00:25:42.958 00:25:42.958 CUnit - A unit testing framework for C - Version 2.1-3 00:25:42.958 http://cunit.sourceforge.net/ 00:25:42.958 00:25:42.958 00:25:42.958 Suite: nvmf 00:25:42.959 Test: test_nvmf_tcp_create ...[2024-07-22 16:02:47.016244] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:25:42.959 passed 00:25:42.959 Test: test_nvmf_tcp_destroy ...passed 00:25:42.959 Test: test_nvmf_tcp_poll_group_create ...passed 00:25:42.959 Test: test_nvmf_tcp_send_c2h_data ...passed 00:25:42.959 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:25:42.959 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:25:42.959 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:25:42.959 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:25:42.959 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:25:42.959 Test: test_nvmf_tcp_icreq_handle ...[2024-07-22 16:02:47.145255] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.145352] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b0b020 is same with the state(5) to be set 
00:25:42.959 [2024-07-22 16:02:47.145389] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b0b020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.145431] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.145457] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b0b020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.145533] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:25:42.959 [2024-07-22 16:02:47.145571] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.145608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b0d180 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.145633] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:25:42.959 [2024-07-22 16:02:47.145662] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b0d180 is same with the state(5) to be set 00:25:42.959 passed 00:25:42.959 Test: test_nvmf_tcp_check_xfer_type ...passed 00:25:42.959 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-22 16:02:47.145687] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.145733] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b0d180 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.145765] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.145800] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b0d180 is same with the state(5) to be set 00:25:42.959 passed 00:25:42.959 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-22 16:02:47.145874] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:25:42.959 [2024-07-22 16:02:47.145913] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.145934] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61b116a0 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.145980] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x75df61a0c8c0 00:25:42.959 [2024-07-22 16:02:47.146030] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146054] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.146089] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x75df61a0c020 00:25:42.959 [2024-07-22 16:02:47.146137] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146163] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.146202] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:25:42.959 [2024-07-22 16:02:47.146234] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146272] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.146301] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:25:42.959 [2024-07-22 16:02:47.146332] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146358] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.146410] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146442] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.146476] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146505] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.146541] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146573] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 16:02:47.146608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146636] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 passed 00:25:42.959 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-22 16:02:47.146663] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146683] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 [2024-07-22 
16:02:47.146727] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:25:42.959 [2024-07-22 16:02:47.146750] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75df61a0c020 is same with the state(5) to be set 00:25:42.959 passed 00:25:42.959 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-22 16:02:47.175327] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:25:42.959 passed 00:25:42.959 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-22 16:02:47.175374] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:25:42.959 [2024-07-22 16:02:47.176090] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:25:42.959 passed 00:25:42.959 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-22 16:02:47.176141] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:25:42.959 passed 00:25:42.959 00:25:42.959 Run Summary: Type Total Ran Passed Failed Inactive 00:25:42.959 suites 1 1 n/a 0 0 00:25:42.959 tests 17 17 17 0 0 00:25:42.959 asserts 222 222 222 0 n/a 00:25:42.959 00:25:42.959 Elapsed time = 0.192 seconds 00:25:42.959 [2024-07-22 16:02:47.176659] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:25:42.959 [2024-07-22 16:02:47.176690] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
00:25:43.218 16:02:47 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:25:43.218 00:25:43.218 00:25:43.218 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.218 http://cunit.sourceforge.net/ 00:25:43.218 00:25:43.218 00:25:43.218 Suite: nvmf 00:25:43.218 Test: test_nvmf_tgt_create_poll_group ...passed 00:25:43.218 00:25:43.218 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.218 suites 1 1 n/a 0 0 00:25:43.218 tests 1 1 1 0 0 00:25:43.218 asserts 17 17 17 0 n/a 00:25:43.218 00:25:43.218 Elapsed time = 0.030 seconds 00:25:43.218 00:25:43.218 real 0m0.543s 00:25:43.218 user 0m0.214s 00:25:43.218 sys 0m0.327s 00:25:43.218 16:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.218 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 ************************************ 00:25:43.218 END TEST unittest_nvmf 00:25:43.218 ************************************ 00:25:43.218 16:02:47 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:43.218 16:02:47 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:43.218 16:02:47 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:25:43.218 16:02:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.218 16:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.218 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.218 ************************************ 00:25:43.218 START TEST unittest_nvmf_rdma 00:25:43.218 ************************************ 00:25:43.218 16:02:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:25:43.218 00:25:43.218 00:25:43.218 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.218 http://cunit.sourceforge.net/ 00:25:43.218 00:25:43.218 00:25:43.218 Suite: nvmf 00:25:43.218 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-22 16:02:47.430543] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:25:43.218 [2024-07-22 16:02:47.430810] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:25:43.218 [2024-07-22 16:02:47.430861] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:25:43.218 passed 00:25:43.218 Test: test_spdk_nvmf_rdma_request_process ...passed 00:25:43.218 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:25:43.218 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:25:43.218 Test: test_nvmf_rdma_opts_init ...passed 00:25:43.218 Test: test_nvmf_rdma_request_free_data ...passed 00:25:43.218 Test: test_nvmf_rdma_update_ibv_state ...passed 00:25:43.218 Test: test_nvmf_rdma_resources_create ...[2024-07-22 16:02:47.432489] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
00:25:43.218 [2024-07-22 16:02:47.432546] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:25:43.218 passed 00:25:43.218 Test: test_nvmf_rdma_qpair_compare ...passed 00:25:43.218 Test: test_nvmf_rdma_resize_cq ...[2024-07-22 16:02:47.434340] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:25:43.218 Using CQ of insufficient size may lead to CQ overrun 00:25:43.218 [2024-07-22 16:02:47.434413] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:25:43.218 passed 00:25:43.218 00:25:43.218 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.218 suites 1 1 n/a 0 0 00:25:43.218 tests 10 10 10 0 0 00:25:43.218 asserts 584 584 584 0 n/a 00:25:43.218 00:25:43.218 Elapsed time = 0.004 seconds 00:25:43.218 [2024-07-22 16:02:47.434477] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:25:43.218 00:25:43.218 real 0m0.043s 00:25:43.218 user 0m0.018s 00:25:43.218 sys 0m0.026s 00:25:43.218 16:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.218 ************************************ 00:25:43.218 END TEST unittest_nvmf_rdma 00:25:43.218 ************************************ 00:25:43.218 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.477 16:02:47 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:43.477 16:02:47 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:25:43.477 16:02:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.477 16:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.477 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.477 ************************************ 00:25:43.477 START TEST unittest_scsi 00:25:43.477 ************************************ 00:25:43.477 16:02:47 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:25:43.477 16:02:47 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:25:43.477 00:25:43.477 00:25:43.477 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.478 http://cunit.sourceforge.net/ 00:25:43.478 00:25:43.478 00:25:43.478 Suite: dev_suite 00:25:43.478 Test: dev_destruct_null_dev ...passed 00:25:43.478 Test: dev_destruct_zero_luns ...passed 00:25:43.478 Test: dev_destruct_null_lun ...passed 00:25:43.478 Test: dev_destruct_success ...passed 00:25:43.478 Test: dev_construct_num_luns_zero ...[2024-07-22 16:02:47.527240] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:25:43.478 passed 00:25:43.478 Test: dev_construct_no_lun_zero ...passed 00:25:43.478 Test: dev_construct_null_lun ...[2024-07-22 16:02:47.527901] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:25:43.478 passed 00:25:43.478 Test: dev_construct_name_too_long ...passed 00:25:43.478 Test: dev_construct_success ...passed 00:25:43.478 Test: dev_construct_success_lun_zero_not_first ...passed 00:25:43.478 Test: dev_queue_mgmt_task_success ...[2024-07-22 16:02:47.527969] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL 
spdk_scsi_lun for LUN 0 00:25:43.478 [2024-07-22 16:02:47.528062] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:25:43.478 passed 00:25:43.478 Test: dev_queue_task_success ...passed 00:25:43.478 Test: dev_stop_success ...passed 00:25:43.478 Test: dev_add_port_max_ports ...passed 00:25:43.478 Test: dev_add_port_construct_failure1 ...[2024-07-22 16:02:47.528319] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:25:43.478 passed 00:25:43.478 Test: dev_add_port_construct_failure2 ...[2024-07-22 16:02:47.528363] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:25:43.478 [2024-07-22 16:02:47.528400] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:25:43.478 passed 00:25:43.478 Test: dev_add_port_success1 ...passed 00:25:43.478 Test: dev_add_port_success2 ...passed 00:25:43.478 Test: dev_add_port_success3 ...passed 00:25:43.478 Test: dev_find_port_by_id_num_ports_zero ...passed 00:25:43.478 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:25:43.478 Test: dev_find_port_by_id_success ...passed 00:25:43.478 Test: dev_add_lun_bdev_not_found ...passed 00:25:43.478 Test: dev_add_lun_no_free_lun_id ...passed 00:25:43.478 Test: dev_add_lun_success1 ...passed 00:25:43.478 Test: dev_add_lun_success2 ...passed 00:25:43.478 Test: dev_check_pending_tasks ...passed 00:25:43.478 Test: dev_iterate_luns ...passed 00:25:43.478 Test: dev_find_free_lun ...passed 00:25:43.478 00:25:43.478 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.478 suites 1 1 n/a 0 0 00:25:43.478 tests 29 29 29 0 0 00:25:43.478 asserts 97 97 97 0 n/a 00:25:43.478 00:25:43.478 Elapsed time = 0.002 seconds 00:25:43.478 [2024-07-22 16:02:47.528810] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:25:43.478 16:02:47 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:25:43.478 00:25:43.478 00:25:43.478 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.478 http://cunit.sourceforge.net/ 00:25:43.478 00:25:43.478 00:25:43.478 Suite: lun_suite 00:25:43.478 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-22 16:02:47.562463] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:25:43.478 passed 00:25:43.478 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:25:43.478 Test: lun_task_mgmt_execute_lun_reset ...passed 00:25:43.478 Test: lun_task_mgmt_execute_target_reset ...[2024-07-22 16:02:47.562745] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:25:43.478 passed 00:25:43.478 Test: lun_task_mgmt_execute_invalid_case ...passed 00:25:43.478 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-22 16:02:47.562881] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:25:43.478 passed 00:25:43.478 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 
00:25:43.478 Test: lun_append_task_null_lun_not_supported ...passed 00:25:43.478 Test: lun_execute_scsi_task_pending ...passed 00:25:43.478 Test: lun_execute_scsi_task_complete ...passed 00:25:43.478 Test: lun_execute_scsi_task_resize ...passed 00:25:43.478 Test: lun_destruct_success ...passed 00:25:43.478 Test: lun_construct_null_ctx ...passed 00:25:43.478 Test: lun_construct_success ...passed 00:25:43.478 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-22 16:02:47.563137] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:25:43.478 passed 00:25:43.478 Test: lun_reset_task_suspend_scsi_task ...passed 00:25:43.478 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:25:43.478 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:25:43.478 00:25:43.478 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.478 suites 1 1 n/a 0 0 00:25:43.478 tests 18 18 18 0 0 00:25:43.478 asserts 153 153 153 0 n/a 00:25:43.478 00:25:43.478 Elapsed time = 0.001 seconds 00:25:43.478 16:02:47 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:25:43.478 00:25:43.478 00:25:43.478 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.478 http://cunit.sourceforge.net/ 00:25:43.478 00:25:43.478 00:25:43.478 Suite: scsi_suite 00:25:43.478 Test: scsi_init ...passed 00:25:43.478 00:25:43.478 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.478 suites 1 1 n/a 0 0 00:25:43.478 tests 1 1 1 0 0 00:25:43.478 asserts 1 1 1 0 n/a 00:25:43.478 00:25:43.478 Elapsed time = 0.000 seconds 00:25:43.478 16:02:47 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:25:43.478 00:25:43.478 00:25:43.478 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.478 http://cunit.sourceforge.net/ 00:25:43.478 00:25:43.478 00:25:43.478 Suite: translation_suite 00:25:43.478 Test: mode_select_6_test ...passed 00:25:43.478 Test: mode_select_6_test2 ...passed 00:25:43.478 Test: mode_sense_6_test ...passed 00:25:43.478 Test: mode_sense_10_test ...passed 00:25:43.478 Test: inquiry_evpd_test ...passed 00:25:43.478 Test: inquiry_standard_test ...passed 00:25:43.478 Test: inquiry_overflow_test ...passed 00:25:43.478 Test: task_complete_test ...passed 00:25:43.478 Test: lba_range_test ...passed 00:25:43.478 Test: xfer_len_test ...[2024-07-22 16:02:47.632433] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:25:43.478 passed 00:25:43.478 Test: xfer_test ...passed 00:25:43.478 Test: scsi_name_padding_test ...passed 00:25:43.478 Test: get_dif_ctx_test ...passed 00:25:43.478 Test: unmap_split_test ...passed 00:25:43.478 00:25:43.478 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.478 suites 1 1 n/a 0 0 00:25:43.478 tests 14 14 14 0 0 00:25:43.478 asserts 1200 1200 1200 0 n/a 00:25:43.478 00:25:43.478 Elapsed time = 0.006 seconds 00:25:43.478 16:02:47 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:25:43.478 00:25:43.478 00:25:43.478 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.478 http://cunit.sourceforge.net/ 00:25:43.478 00:25:43.478 00:25:43.478 Suite: reservation_suite 00:25:43.478 Test: test_reservation_register ...[2024-07-22 16:02:47.660128] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match 
registrant's key 0xa 00:25:43.478 passed 00:25:43.478 Test: test_reservation_reserve ...[2024-07-22 16:02:47.660419] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:25:43.478 [2024-07-22 16:02:47.660489] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:25:43.478 passed 00:25:43.478 Test: test_reservation_preempt_non_all_regs ...[2024-07-22 16:02:47.660537] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:25:43.478 [2024-07-22 16:02:47.660596] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:25:43.478 passed 00:25:43.478 Test: test_reservation_preempt_all_regs ...[2024-07-22 16:02:47.660654] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:25:43.478 [2024-07-22 16:02:47.660742] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:25:43.478 passed 00:25:43.478 Test: test_reservation_cmds_conflict ...[2024-07-22 16:02:47.660852] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:25:43.478 [2024-07-22 16:02:47.660924] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:25:43.478 passed 00:25:43.478 Test: test_scsi2_reserve_release ...passed 00:25:43.478 Test: test_pr_with_scsi2_reserve_release ...passed[2024-07-22 16:02:47.660959] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:25:43.478 [2024-07-22 16:02:47.661013] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:25:43.478 [2024-07-22 16:02:47.661045] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:25:43.478 [2024-07-22 16:02:47.661079] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:25:43.479 [2024-07-22 16:02:47.661150] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:25:43.479 00:25:43.479 00:25:43.479 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.479 suites 1 1 n/a 0 0 00:25:43.479 tests 7 7 7 0 0 00:25:43.479 asserts 257 257 257 0 n/a 00:25:43.479 00:25:43.479 Elapsed time = 0.001 seconds 00:25:43.479 ************************************ 00:25:43.479 END TEST unittest_scsi 00:25:43.479 ************************************ 00:25:43.479 00:25:43.479 real 0m0.164s 00:25:43.479 user 0m0.082s 00:25:43.479 sys 0m0.084s 00:25:43.479 16:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.479 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.479 16:02:47 -- unit/unittest.sh@276 -- # uname -s 00:25:43.479 16:02:47 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:25:43.479 16:02:47 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:25:43.479 16:02:47 -- common/autotest_common.sh@1077 -- # '[' 
2 -le 1 ']' 00:25:43.479 16:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.479 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.479 ************************************ 00:25:43.479 START TEST unittest_sock 00:25:43.479 ************************************ 00:25:43.479 16:02:47 -- common/autotest_common.sh@1104 -- # unittest_sock 00:25:43.479 16:02:47 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:25:43.479 00:25:43.479 00:25:43.479 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.479 http://cunit.sourceforge.net/ 00:25:43.479 00:25:43.479 00:25:43.479 Suite: sock 00:25:43.737 Test: posix_sock ...passed 00:25:43.737 Test: ut_sock ...passed 00:25:43.737 Test: posix_sock_group ...passed 00:25:43.737 Test: ut_sock_group ...passed 00:25:43.737 Test: posix_sock_group_fairness ...passed 00:25:43.737 Test: _posix_sock_close ...passed 00:25:43.737 Test: sock_get_default_opts ...passed 00:25:43.737 Test: ut_sock_impl_get_set_opts ...passed 00:25:43.737 Test: posix_sock_impl_get_set_opts ...passed 00:25:43.737 Test: ut_sock_map ...passed 00:25:43.737 Test: override_impl_opts ...passed 00:25:43.737 Test: ut_sock_group_get_ctx ...passed 00:25:43.737 00:25:43.737 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.737 suites 1 1 n/a 0 0 00:25:43.737 tests 12 12 12 0 0 00:25:43.737 asserts 349 349 349 0 n/a 00:25:43.737 00:25:43.737 Elapsed time = 0.009 seconds 00:25:43.737 16:02:47 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:25:43.737 00:25:43.737 00:25:43.737 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.737 http://cunit.sourceforge.net/ 00:25:43.737 00:25:43.737 00:25:43.737 Suite: posix 00:25:43.737 Test: flush ...passed 00:25:43.737 00:25:43.737 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.737 suites 1 1 n/a 0 0 00:25:43.737 tests 1 1 1 0 0 00:25:43.737 asserts 28 28 28 0 n/a 00:25:43.737 00:25:43.737 Elapsed time = 0.000 seconds 00:25:43.737 16:02:47 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:43.737 ************************************ 00:25:43.737 END TEST unittest_sock 00:25:43.737 ************************************ 00:25:43.737 00:25:43.737 real 0m0.104s 00:25:43.737 user 0m0.034s 00:25:43.737 sys 0m0.046s 00:25:43.737 16:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.738 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.738 16:02:47 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:25:43.738 16:02:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.738 16:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.738 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.738 ************************************ 00:25:43.738 START TEST unittest_thread 00:25:43.738 ************************************ 00:25:43.738 16:02:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:25:43.738 00:25:43.738 00:25:43.738 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.738 http://cunit.sourceforge.net/ 00:25:43.738 00:25:43.738 00:25:43.738 Suite: io_channel 00:25:43.738 Test: thread_alloc ...passed 00:25:43.738 Test: thread_send_msg ...passed 00:25:43.738 Test: thread_poller ...passed 00:25:43.738 Test: poller_pause 
...passed 00:25:43.738 Test: thread_for_each ...passed 00:25:43.738 Test: for_each_channel_remove ...passed 00:25:43.738 Test: for_each_channel_unreg ...[2024-07-22 16:02:47.933462] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7aaf8b309640 already registered (old:0x513000000200 new:0x5130000003c0) 00:25:43.738 passed 00:25:43.738 Test: thread_name ...passed 00:25:43.738 Test: channel ...[2024-07-22 16:02:47.939257] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x59eaaee3f120 00:25:43.738 passed 00:25:43.738 Test: channel_destroy_races ...passed 00:25:43.738 Test: thread_exit_test ...[2024-07-22 16:02:47.946532] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x518000005c80 got timeout, and move it to the exited state forcefully 00:25:43.738 passed 00:25:43.738 Test: thread_update_stats_test ...passed 00:25:43.738 Test: nested_channel ...passed 00:25:43.738 Test: device_unregister_and_thread_exit_race ...passed 00:25:43.738 Test: cache_closest_timed_poller ...passed 00:25:43.738 Test: multi_timed_pollers_have_same_expiration ...passed 00:25:43.738 Test: io_device_lookup ...passed 00:25:43.738 Test: spdk_spin ...[2024-07-22 16:02:47.956903] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:25:43.738 [2024-07-22 16:02:47.956963] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7aaf8b30a020 00:25:43.738 [2024-07-22 16:02:47.956983] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:25:43.738 [2024-07-22 16:02:47.958610] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:25:43.738 [2024-07-22 16:02:47.958661] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7aaf8b30a020 00:25:43.738 [2024-07-22 16:02:47.958684] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:25:43.738 [2024-07-22 16:02:47.958711] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7aaf8b30a020 00:25:43.738 [2024-07-22 16:02:47.958738] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:25:43.738 [2024-07-22 16:02:47.958757] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7aaf8b30a020 00:25:43.738 [2024-07-22 16:02:47.958780] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:25:43.738 [2024-07-22 16:02:47.958812] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7aaf8b30a020 00:25:43.738 passed 00:25:43.738 Test: for_each_channel_and_thread_exit_race ...passed 00:25:43.738 Test: for_each_thread_and_thread_exit_race ...passed 00:25:43.738 00:25:43.738 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.738 suites 1 1 n/a 0 0 00:25:43.738 tests 20 20 20 0 0 00:25:43.738 asserts 409 
409 409 0 n/a 00:25:43.738 00:25:43.738 Elapsed time = 0.063 seconds 00:25:43.738 00:25:43.738 real 0m0.104s 00:25:43.738 user 0m0.070s 00:25:43.738 sys 0m0.034s 00:25:43.738 16:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.738 16:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.738 ************************************ 00:25:43.738 END TEST unittest_thread 00:25:43.738 ************************************ 00:25:43.997 16:02:48 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:25:43.997 16:02:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.997 16:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.997 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:43.997 ************************************ 00:25:43.997 START TEST unittest_iobuf 00:25:43.997 ************************************ 00:25:43.997 16:02:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:25:43.997 00:25:43.997 00:25:43.997 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.997 http://cunit.sourceforge.net/ 00:25:43.997 00:25:43.997 00:25:43.997 Suite: io_channel 00:25:43.997 Test: iobuf ...passed 00:25:43.997 Test: iobuf_cache ...[2024-07-22 16:02:48.065275] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:25:43.997 [2024-07-22 16:02:48.065496] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:25:43.997 [2024-07-22 16:02:48.065579] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:25:43.997 [2024-07-22 16:02:48.065615] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:25:43.997 [2024-07-22 16:02:48.065679] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:25:43.997 [2024-07-22 16:02:48.065714] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:25:43.997 passed 00:25:43.997 00:25:43.997 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.997 suites 1 1 n/a 0 0 00:25:43.997 tests 2 2 2 0 0 00:25:43.997 asserts 107 107 107 0 n/a 00:25:43.997 00:25:43.997 Elapsed time = 0.006 seconds 00:25:43.997 00:25:43.997 real 0m0.040s 00:25:43.997 user 0m0.019s 00:25:43.997 sys 0m0.021s 00:25:43.997 16:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.997 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:43.997 ************************************ 00:25:43.997 END TEST unittest_iobuf 00:25:43.997 ************************************ 00:25:43.997 16:02:48 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:25:43.997 16:02:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.997 16:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.997 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:43.997 ************************************ 00:25:43.997 START TEST unittest_util 00:25:43.997 ************************************ 00:25:43.997 16:02:48 -- common/autotest_common.sh@1104 -- # unittest_util 00:25:43.997 16:02:48 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:25:43.997 00:25:43.997 00:25:43.997 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.997 http://cunit.sourceforge.net/ 00:25:43.997 00:25:43.997 00:25:43.997 Suite: base64 00:25:43.997 Test: test_base64_get_encoded_strlen ...passed 00:25:43.997 Test: test_base64_get_decoded_len ...passed 00:25:43.997 Test: test_base64_encode ...passed 00:25:43.997 Test: test_base64_decode ...passed 00:25:43.997 Test: test_base64_urlsafe_encode ...passed 00:25:43.997 Test: test_base64_urlsafe_decode ...passed 00:25:43.997 00:25:43.997 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.997 suites 1 1 n/a 0 0 00:25:43.997 tests 6 6 6 0 0 00:25:43.997 asserts 112 112 112 0 n/a 00:25:43.997 00:25:43.997 Elapsed time = 0.000 seconds 00:25:43.997 16:02:48 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:25:43.997 00:25:43.997 00:25:43.997 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.997 http://cunit.sourceforge.net/ 00:25:43.997 00:25:43.997 00:25:43.997 Suite: bit_array 00:25:43.997 Test: test_1bit ...passed 00:25:43.997 Test: test_64bit ...passed 00:25:43.997 Test: test_find ...passed 00:25:43.997 Test: test_resize ...passed 00:25:43.997 Test: test_errors ...passed 00:25:43.997 Test: test_count ...passed 00:25:43.997 Test: test_mask_store_load ...passed 00:25:43.997 Test: test_mask_clear ...passed 00:25:43.997 00:25:43.997 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.997 suites 1 1 n/a 0 0 00:25:43.997 tests 8 8 8 0 0 00:25:43.997 asserts 5075 5075 5075 0 n/a 00:25:43.997 00:25:43.997 Elapsed time = 0.002 seconds 00:25:43.997 16:02:48 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:25:43.997 00:25:43.997 00:25:43.997 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.997 http://cunit.sourceforge.net/ 00:25:43.997 00:25:43.997 00:25:43.997 Suite: cpuset 00:25:43.997 Test: test_cpuset ...passed 00:25:43.997 Test: test_cpuset_parse ...[2024-07-22 16:02:48.213795] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:25:43.997 [2024-07-22 16:02:48.214024] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:25:43.997 [2024-07-22 16:02:48.214062] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:25:43.997 [2024-07-22 16:02:48.214098] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:25:43.997 [2024-07-22 16:02:48.214126] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:25:43.997 passed 00:25:43.997 Test: test_cpuset_fmt ...[2024-07-22 16:02:48.214158] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:25:43.997 [2024-07-22 16:02:48.214185] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:25:43.998 [2024-07-22 16:02:48.214217] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:25:43.998 passed 00:25:43.998 00:25:43.998 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.998 suites 1 1 n/a 0 0 00:25:43.998 tests 3 3 3 0 0 00:25:43.998 asserts 65 65 65 0 n/a 00:25:43.998 00:25:43.998 Elapsed time = 0.002 seconds 00:25:43.998 16:02:48 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:25:43.998 00:25:43.998 00:25:43.998 CUnit - A unit testing framework for C - Version 2.1-3 00:25:43.998 http://cunit.sourceforge.net/ 00:25:43.998 00:25:43.998 00:25:43.998 Suite: crc16 00:25:43.998 Test: test_crc16_t10dif ...passed 00:25:43.998 Test: test_crc16_t10dif_seed ...passed 00:25:43.998 Test: test_crc16_t10dif_copy ...passed 00:25:43.998 00:25:43.998 Run Summary: Type Total Ran Passed Failed Inactive 00:25:43.998 suites 1 1 n/a 0 0 00:25:43.998 tests 3 3 3 0 0 00:25:43.998 asserts 5 5 5 0 n/a 00:25:43.998 00:25:43.998 Elapsed time = 0.000 seconds 00:25:43.998 16:02:48 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:25:44.258 00:25:44.258 00:25:44.258 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.258 http://cunit.sourceforge.net/ 00:25:44.258 00:25:44.258 00:25:44.258 Suite: crc32_ieee 00:25:44.258 Test: test_crc32_ieee ...passed 00:25:44.258 00:25:44.258 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.258 suites 1 1 n/a 0 0 00:25:44.258 tests 1 1 1 0 0 00:25:44.258 asserts 1 1 1 0 n/a 00:25:44.258 00:25:44.258 Elapsed time = 0.000 seconds 00:25:44.258 16:02:48 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:25:44.258 00:25:44.258 00:25:44.258 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.258 http://cunit.sourceforge.net/ 00:25:44.258 00:25:44.258 00:25:44.258 Suite: crc32c 00:25:44.258 Test: test_crc32c ...passed 00:25:44.258 Test: test_crc32c_nvme ...passed 00:25:44.258 00:25:44.258 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.258 suites 1 1 n/a 0 0 00:25:44.258 tests 2 2 2 0 0 00:25:44.258 asserts 16 16 16 0 n/a 00:25:44.258 00:25:44.258 Elapsed time = 0.001 seconds 00:25:44.258 16:02:48 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:25:44.258 00:25:44.258 00:25:44.258 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.258 http://cunit.sourceforge.net/ 00:25:44.258 00:25:44.258 00:25:44.258 Suite: crc64 00:25:44.258 Test: test_crc64_nvme 
...passed 00:25:44.258 00:25:44.258 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.258 suites 1 1 n/a 0 0 00:25:44.258 tests 1 1 1 0 0 00:25:44.258 asserts 4 4 4 0 n/a 00:25:44.258 00:25:44.258 Elapsed time = 0.000 seconds 00:25:44.258 16:02:48 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:25:44.258 00:25:44.258 00:25:44.258 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.258 http://cunit.sourceforge.net/ 00:25:44.258 00:25:44.258 00:25:44.258 Suite: string 00:25:44.258 Test: test_parse_ip_addr ...passed 00:25:44.258 Test: test_str_chomp ...passed 00:25:44.258 Test: test_parse_capacity ...passed 00:25:44.258 Test: test_sprintf_append_realloc ...passed 00:25:44.258 Test: test_strtol ...passed 00:25:44.258 Test: test_strtoll ...passed 00:25:44.258 Test: test_strarray ...passed 00:25:44.258 Test: test_strcpy_replace ...passed 00:25:44.258 00:25:44.258 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.258 suites 1 1 n/a 0 0 00:25:44.258 tests 8 8 8 0 0 00:25:44.258 asserts 161 161 161 0 n/a 00:25:44.258 00:25:44.258 Elapsed time = 0.001 seconds 00:25:44.258 16:02:48 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:25:44.258 00:25:44.258 00:25:44.258 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.258 http://cunit.sourceforge.net/ 00:25:44.258 00:25:44.258 00:25:44.258 Suite: dif 00:25:44.258 Test: dif_generate_and_verify_test ...[2024-07-22 16:02:48.396883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:25:44.258 [2024-07-22 16:02:48.397243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:25:44.258 [2024-07-22 16:02:48.397503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:25:44.258 [2024-07-22 16:02:48.397734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:25:44.258 [2024-07-22 16:02:48.397962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:25:44.258 [2024-07-22 16:02:48.398196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:25:44.258 passed 00:25:44.259 Test: dif_disable_check_test ...[2024-07-22 16:02:48.399061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:25:44.259 [2024-07-22 16:02:48.399286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:25:44.259 [2024-07-22 16:02:48.399525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:25:44.259 passed 00:25:44.259 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-22 16:02:48.400398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:25:44.259 [2024-07-22 16:02:48.400653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:25:44.259 [2024-07-22 
16:02:48.400906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:25:44.259 [2024-07-22 16:02:48.401174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:25:44.259 [2024-07-22 16:02:48.401418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:25:44.259 [2024-07-22 16:02:48.401687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:25:44.259 [2024-07-22 16:02:48.401961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:25:44.259 [2024-07-22 16:02:48.402210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:25:44.259 [2024-07-22 16:02:48.402493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:25:44.259 [2024-07-22 16:02:48.402778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:25:44.259 [2024-07-22 16:02:48.403062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:25:44.259 passed 00:25:44.259 Test: dif_apptag_mask_test ...[2024-07-22 16:02:48.403301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:25:44.259 [2024-07-22 16:02:48.403545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:25:44.259 passed 00:25:44.259 Test: dif_sec_512_md_0_error_test ...passed 00:25:44.259 Test: dif_sec_4096_md_0_error_test ...[2024-07-22 16:02:48.403707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:25:44.259 passed 00:25:44.259 Test: dif_sec_4100_md_128_error_test ...[2024-07-22 16:02:48.403737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:25:44.259 [2024-07-22 16:02:48.403776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:25:44.259 [2024-07-22 16:02:48.403811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:25:44.259 passed 00:25:44.259 Test: dif_guard_seed_test ...passed 00:25:44.259 Test: dif_guard_value_test ...[2024-07-22 16:02:48.403841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:25:44.259 passed 00:25:44.259 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:25:44.259 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:25:44.259 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-22 16:02:48.439489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd5c, Actual=fd4c 00:25:44.259 [2024-07-22 16:02:48.441434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe31, Actual=fe21 00:25:44.259 [2024-07-22 16:02:48.443369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.445540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.447480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.259 [2024-07-22 16:02:48.449422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.259 [2024-07-22 16:02:48.451341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=3ac 00:25:44.259 [2024-07-22 16:02:48.452469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=e6e1 00:25:44.259 [2024-07-22 16:02:48.453592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753fd, 
Actual=1ab753ed 00:25:44.259 [2024-07-22 16:02:48.455529] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574670, Actual=38574660 00:25:44.259 [2024-07-22 16:02:48.457449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.459390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.461304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=100000005a 00:25:44.259 [2024-07-22 16:02:48.463272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=100000005a 00:25:44.259 [2024-07-22 16:02:48.465225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=c194175f 00:25:44.259 [2024-07-22 16:02:48.466365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=dab5fbfd 00:25:44.259 [2024-07-22 16:02:48.467537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.259 [2024-07-22 16:02:48.469507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:25:44.259 [2024-07-22 16:02:48.471468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.473411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.475347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.259 [2024-07-22 16:02:48.477275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.259 [2024-07-22 16:02:48.479392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.259 [2024-07-22 16:02:48.480552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=8a43314a109dc035 00:25:44.259 passed 00:25:44.259 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-22 16:02:48.481002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:25:44.259 [2024-07-22 16:02:48.481254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:25:44.259 [2024-07-22 16:02:48.481508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.481755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.259 [2024-07-22 
16:02:48.482006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.259 [2024-07-22 16:02:48.482264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.259 [2024-07-22 16:02:48.482520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ac 00:25:44.259 [2024-07-22 16:02:48.482692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e6e1 00:25:44.259 [2024-07-22 16:02:48.482864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:25:44.259 [2024-07-22 16:02:48.483111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:25:44.259 [2024-07-22 16:02:48.483348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.483587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.259 [2024-07-22 16:02:48.483825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.259 [2024-07-22 16:02:48.484061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.259 [2024-07-22 16:02:48.484282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c194175f 00:25:44.259 [2024-07-22 16:02:48.484501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=dab5fbfd 00:25:44.259 [2024-07-22 16:02:48.484690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.260 [2024-07-22 16:02:48.484925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:25:44.260 [2024-07-22 16:02:48.485177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.485407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.485638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.485906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.486138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.260 [2024-07-22 16:02:48.486291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=8a43314a109dc035 00:25:44.260 passed 00:25:44.260 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-22 16:02:48.486536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:25:44.260 [2024-07-22 16:02:48.486812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:25:44.260 [2024-07-22 16:02:48.487055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.487292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.487539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.487762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.488008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ac 00:25:44.260 [2024-07-22 16:02:48.488193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e6e1 00:25:44.260 [2024-07-22 16:02:48.488371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:25:44.260 [2024-07-22 16:02:48.488618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:25:44.260 [2024-07-22 16:02:48.488844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.489101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.489334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.260 [2024-07-22 16:02:48.489582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.260 [2024-07-22 16:02:48.489815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c194175f 00:25:44.260 [2024-07-22 16:02:48.489997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=dab5fbfd 00:25:44.260 [2024-07-22 16:02:48.490163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.260 [2024-07-22 16:02:48.490418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:25:44.260 [2024-07-22 16:02:48.490654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.490914] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.491146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.491383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.491611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.260 [2024-07-22 16:02:48.491793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8a43314a109dc035 00:25:44.260 passed 00:25:44.260 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-22 16:02:48.491979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:25:44.260 [2024-07-22 16:02:48.492228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:25:44.260 [2024-07-22 16:02:48.492449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.492693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.492928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.493212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.493439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ac 00:25:44.260 [2024-07-22 16:02:48.493642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e6e1 00:25:44.260 [2024-07-22 16:02:48.493809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:25:44.260 [2024-07-22 16:02:48.494050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:25:44.260 [2024-07-22 16:02:48.494311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.494556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.494794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.260 [2024-07-22 16:02:48.495052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.260 [2024-07-22 16:02:48.495272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=1ab753ed, Actual=c194175f 00:25:44.260 [2024-07-22 16:02:48.495484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=dab5fbfd 00:25:44.260 [2024-07-22 16:02:48.495671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.260 [2024-07-22 16:02:48.495910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:25:44.260 [2024-07-22 16:02:48.496144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.496386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.496613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.496860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.497098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.260 [2024-07-22 16:02:48.497260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8a43314a109dc035 00:25:44.260 passed 00:25:44.260 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-22 16:02:48.497496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:25:44.260 [2024-07-22 16:02:48.497747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:25:44.260 [2024-07-22 16:02:48.497981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.498250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.498485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.498729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.260 [2024-07-22 16:02:48.498959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ac 00:25:44.260 [2024-07-22 16:02:48.499141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e6e1 00:25:44.260 passed 00:25:44.260 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-22 16:02:48.499350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:25:44.260 [2024-07-22 16:02:48.499580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:25:44.260 [2024-07-22 16:02:48.499816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.500063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.260 [2024-07-22 16:02:48.500295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.260 [2024-07-22 16:02:48.500521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.260 [2024-07-22 16:02:48.500751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c194175f 00:25:44.260 [2024-07-22 16:02:48.500930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=dab5fbfd 00:25:44.260 [2024-07-22 16:02:48.501125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.260 [2024-07-22 16:02:48.501359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:25:44.260 [2024-07-22 16:02:48.501584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.501816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.502047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.261 [2024-07-22 16:02:48.502294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.261 [2024-07-22 16:02:48.502522] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.261 [2024-07-22 16:02:48.502696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8a43314a109dc035 00:25:44.261 passed 00:25:44.261 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-22 16:02:48.502890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:25:44.261 [2024-07-22 16:02:48.503139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe31, Actual=fe21 00:25:44.261 [2024-07-22 16:02:48.503365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.503599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.503824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=48 00:25:44.261 [2024-07-22 16:02:48.504074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.261 [2024-07-22 16:02:48.504304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ac 00:25:44.261 [2024-07-22 16:02:48.504492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=e6e1 00:25:44.261 passed 00:25:44.261 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-22 16:02:48.504707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:25:44.261 [2024-07-22 16:02:48.504940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574670, Actual=38574660 00:25:44.261 [2024-07-22 16:02:48.505181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.505415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.505632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.261 [2024-07-22 16:02:48.505871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.261 [2024-07-22 16:02:48.506117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c194175f 00:25:44.261 [2024-07-22 16:02:48.506300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=dab5fbfd 00:25:44.261 [2024-07-22 16:02:48.506510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.261 [2024-07-22 16:02:48.506752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a3d4837a266, Actual=88010a2d4837a266 00:25:44.261 [2024-07-22 16:02:48.506979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.507224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.261 [2024-07-22 16:02:48.507450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.261 [2024-07-22 16:02:48.507687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.261 [2024-07-22 16:02:48.507903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.261 [2024-07-22 16:02:48.508085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=8a43314a109dc035 00:25:44.261 passed 00:25:44.261 Test: 
dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:25:44.261 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:25:44.261 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:25:44.261 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:25:44.520 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:25:44.520 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:25:44.520 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:25:44.520 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:25:44.520 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:25:44.520 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-22 16:02:48.543373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd5c, Actual=fd4c 00:25:44.520 [2024-07-22 16:02:48.544285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f56e, Actual=f57e 00:25:44.520 [2024-07-22 16:02:48.545166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.546044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.546921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.520 [2024-07-22 16:02:48.547798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.520 [2024-07-22 16:02:48.548667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=3ac 00:25:44.520 [2024-07-22 16:02:48.549546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=c1cd 00:25:44.520 [2024-07-22 16:02:48.550432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753fd, Actual=1ab753ed 00:25:44.520 [2024-07-22 16:02:48.551329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d2206689, Actual=d2206699 00:25:44.520 [2024-07-22 16:02:48.552206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.553110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.553967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=100000005a 00:25:44.520 [2024-07-22 16:02:48.554849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=100000005a 00:25:44.520 [2024-07-22 16:02:48.555745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=c194175f 00:25:44.520 [2024-07-22 16:02:48.556624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, 
Expected=890612e, Actual=ea72dcb3 00:25:44.520 [2024-07-22 16:02:48.557515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.520 [2024-07-22 16:02:48.558449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=e7df60f1c21897e5, Actual=e7df60e1c21897e5 00:25:44.520 [2024-07-22 16:02:48.559325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.560191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.561072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.520 [2024-07-22 16:02:48.561943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.520 [2024-07-22 16:02:48.562817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.520 passed 00:25:44.520 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-22 16:02:48.563688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=dab9859c314c596b 00:25:44.520 [2024-07-22 16:02:48.563959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:25:44.520 [2024-07-22 16:02:48.564195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=175, Actual=165 00:25:44.520 [2024-07-22 16:02:48.564420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.564647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.520 [2024-07-22 16:02:48.564857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.520 [2024-07-22 16:02:48.565091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.520 [2024-07-22 16:02:48.565296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ac 00:25:44.520 [2024-07-22 16:02:48.565489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=35d6 00:25:44.520 [2024-07-22 16:02:48.565676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:25:44.521 [2024-07-22 16:02:48.565869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3016470b, Actual=3016471b 00:25:44.521 [2024-07-22 16:02:48.566082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.566313] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.566514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.521 [2024-07-22 16:02:48.566724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.521 [2024-07-22 16:02:48.566944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c194175f 00:25:44.521 [2024-07-22 16:02:48.567187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=844fd31 00:25:44.521 [2024-07-22 16:02:48.567397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.521 [2024-07-22 16:02:48.567604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=742f411fd249c7f, Actual=742f401fd249c7f 00:25:44.521 [2024-07-22 16:02:48.567814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.568032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.568241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.521 [2024-07-22 16:02:48.568439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.521 [2024-07-22 16:02:48.568648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.521 passed 00:25:44.521 Test: dix_sec_512_md_0_error ...[2024-07-22 16:02:48.568857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=3a24117c0e7052f1 00:25:44.521 [2024-07-22 16:02:48.568901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smallpassed 00:25:44.521 Test: dix_sec_512_md_8_prchk_0_single_iov ...er than DIF size. 
00:25:44.521 passed 00:25:44.521 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:25:44.521 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:25:44.521 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:25:44.521 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:25:44.521 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:25:44.521 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:25:44.521 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:25:44.521 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:25:44.521 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-22 16:02:48.603556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd5c, Actual=fd4c 00:25:44.521 [2024-07-22 16:02:48.604444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=f56e, Actual=f57e 00:25:44.521 [2024-07-22 16:02:48.605322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.606191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.607076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.521 [2024-07-22 16:02:48.607981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.521 [2024-07-22 16:02:48.608862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=3ac 00:25:44.521 [2024-07-22 16:02:48.609736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=c1cd 00:25:44.521 [2024-07-22 16:02:48.610619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753fd, Actual=1ab753ed 00:25:44.521 [2024-07-22 16:02:48.611497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d2206689, Actual=d2206699 00:25:44.521 [2024-07-22 16:02:48.612381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.613252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.614117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=100000005a 00:25:44.521 [2024-07-22 16:02:48.614981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=100000005a 00:25:44.521 [2024-07-22 16:02:48.615854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=c194175f 00:25:44.521 [2024-07-22 16:02:48.616722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=ea72dcb3 00:25:44.521 [2024-07-22 16:02:48.617578] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.521 [2024-07-22 16:02:48.618446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=e7df60f1c21897e5, Actual=e7df60e1c21897e5 00:25:44.521 [2024-07-22 16:02:48.619331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.620188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.621070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.521 [2024-07-22 16:02:48.621927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4a 00:25:44.521 [2024-07-22 16:02:48.622790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.521 passed 00:25:44.521 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-22 16:02:48.623666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=dab9859c314c596b 00:25:44.521 [2024-07-22 16:02:48.623976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd5c, Actual=fd4c 00:25:44.521 [2024-07-22 16:02:48.624199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=175, Actual=165 00:25:44.521 [2024-07-22 16:02:48.624404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.624614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.624823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.521 [2024-07-22 16:02:48.625039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.521 [2024-07-22 16:02:48.625253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=3ac 00:25:44.521 [2024-07-22 16:02:48.625446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=35d6 00:25:44.521 [2024-07-22 16:02:48.625653] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753fd, Actual=1ab753ed 00:25:44.521 [2024-07-22 16:02:48.625860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3016470b, Actual=3016471b 00:25:44.521 [2024-07-22 16:02:48.626075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.626270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare 
App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.626487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.521 [2024-07-22 16:02:48.626699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000058 00:25:44.521 [2024-07-22 16:02:48.626906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c194175f 00:25:44.521 [2024-07-22 16:02:48.627124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=844fd31 00:25:44.521 [2024-07-22 16:02:48.627339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7628ecc20d3, Actual=a576a7728ecc20d3 00:25:44.521 [2024-07-22 16:02:48.627540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=742f411fd249c7f, Actual=742f401fd249c7f 00:25:44.521 [2024-07-22 16:02:48.627750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.627956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=98 00:25:44.521 [2024-07-22 16:02:48.628175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.521 [2024-07-22 16:02:48.628383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=48 00:25:44.521 [2024-07-22 16:02:48.628596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b5e31b272fa6f97e 00:25:44.521 passed 00:25:44.521 Test: set_md_interleave_iovs_test ...[2024-07-22 16:02:48.628788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=3a24117c0e7052f1 00:25:44.521 passed 00:25:44.521 Test: set_md_interleave_iovs_split_test ...passed 00:25:44.521 Test: dif_generate_stream_pi_16_test ...passed 00:25:44.521 Test: dif_generate_stream_test ...passed 00:25:44.521 Test: set_md_interleave_iovs_alignment_test ...passed[2024-07-22 16:02:48.634908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:25:44.521 00:25:44.521 Test: dif_generate_split_test ...passed 00:25:44.521 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:25:44.521 Test: dif_verify_split_test ...passed 00:25:44.521 Test: dif_verify_stream_multi_segments_test ...passed 00:25:44.521 Test: update_crc32c_pi_16_test ...passed 00:25:44.521 Test: update_crc32c_test ...passed 00:25:44.521 Test: dif_update_crc32c_split_test ...passed 00:25:44.521 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:25:44.521 Test: get_range_with_md_test ...passed 00:25:44.521 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:25:44.521 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:25:44.522 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:25:44.522 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:25:44.522 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:25:44.522 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:25:44.522 Test: dif_generate_and_verify_unmap_test ...passed 00:25:44.522 00:25:44.522 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.522 suites 1 1 n/a 0 0 00:25:44.522 tests 79 79 79 0 0 00:25:44.522 asserts 3584 3584 3584 0 n/a 00:25:44.522 00:25:44.522 Elapsed time = 0.275 seconds 00:25:44.522 16:02:48 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:25:44.522 00:25:44.522 00:25:44.522 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.522 http://cunit.sourceforge.net/ 00:25:44.522 00:25:44.522 00:25:44.522 Suite: iov 00:25:44.522 Test: test_single_iov ...passed 00:25:44.522 Test: test_simple_iov ...passed 00:25:44.522 Test: test_complex_iov ...passed 00:25:44.522 Test: test_iovs_to_buf ...passed 00:25:44.522 Test: test_buf_to_iovs ...passed 00:25:44.522 Test: test_memset ...passed 00:25:44.522 Test: test_iov_one ...passed 00:25:44.522 Test: test_iov_xfer ...passed 00:25:44.522 00:25:44.522 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.522 suites 1 1 n/a 0 0 00:25:44.522 tests 8 8 8 0 0 00:25:44.522 asserts 156 156 156 0 n/a 00:25:44.522 00:25:44.522 Elapsed time = 0.000 seconds 00:25:44.522 16:02:48 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:25:44.522 00:25:44.522 00:25:44.522 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.522 http://cunit.sourceforge.net/ 00:25:44.522 00:25:44.522 00:25:44.522 Suite: math 00:25:44.522 Test: test_serial_number_arithmetic ...passed 00:25:44.522 Suite: erase 00:25:44.522 Test: test_memset_s ...passed 00:25:44.522 00:25:44.522 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.522 suites 2 2 n/a 0 0 00:25:44.522 tests 2 2 2 0 0 00:25:44.522 asserts 18 18 18 0 n/a 00:25:44.522 00:25:44.522 Elapsed time = 0.000 seconds 00:25:44.522 16:02:48 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:25:44.522 00:25:44.522 00:25:44.522 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.522 http://cunit.sourceforge.net/ 00:25:44.522 00:25:44.522 00:25:44.522 Suite: pipe 00:25:44.522 Test: test_create_destroy ...passed 00:25:44.522 Test: test_write_get_buffer ...passed 00:25:44.522 Test: test_write_advance ...passed 00:25:44.522 Test: test_read_get_buffer ...passed 00:25:44.522 Test: test_read_advance ...passed 00:25:44.522 Test: test_data ...passed 00:25:44.522 00:25:44.522 Run Summary: Type Total Ran Passed 
Failed Inactive 00:25:44.522 suites 1 1 n/a 0 0 00:25:44.522 tests 6 6 6 0 0 00:25:44.522 asserts 250 250 250 0 n/a 00:25:44.522 00:25:44.522 Elapsed time = 0.000 seconds 00:25:44.522 16:02:48 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:25:44.522 00:25:44.522 00:25:44.522 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.522 http://cunit.sourceforge.net/ 00:25:44.522 00:25:44.522 00:25:44.522 Suite: xor 00:25:44.780 Test: test_xor_gen ...passed 00:25:44.780 00:25:44.780 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.780 suites 1 1 n/a 0 0 00:25:44.780 tests 1 1 1 0 0 00:25:44.780 asserts 17 17 17 0 n/a 00:25:44.780 00:25:44.780 Elapsed time = 0.007 seconds 00:25:44.780 00:25:44.780 real 0m0.672s 00:25:44.780 user 0m0.459s 00:25:44.780 sys 0m0.217s 00:25:44.780 16:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.780 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:44.780 ************************************ 00:25:44.780 END TEST unittest_util 00:25:44.780 ************************************ 00:25:44.780 16:02:48 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:25:44.780 16:02:48 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:25:44.780 16:02:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:44.780 16:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:44.780 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:44.780 ************************************ 00:25:44.780 START TEST unittest_vhost 00:25:44.780 ************************************ 00:25:44.780 16:02:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:25:44.780 00:25:44.780 00:25:44.780 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.780 http://cunit.sourceforge.net/ 00:25:44.780 00:25:44.780 00:25:44.780 Suite: vhost_suite 00:25:44.780 Test: desc_to_iov_test ...[2024-07-22 16:02:48.891444] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:25:44.780 passed 00:25:44.780 Test: create_controller_test ...[2024-07-22 16:02:48.896018] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:25:44.780 [2024-07-22 16:02:48.896110] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:25:44.780 [2024-07-22 16:02:48.896217] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:25:44.780 [2024-07-22 16:02:48.896288] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:25:44.780 [2024-07-22 16:02:48.896317] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:25:44.780 [2024-07-22 16:02:48.896387] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxpassed 00:25:44.780 Test: session_find_by_vid_test ...[2024-07-22 16:02:48.897466] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:25:44.780 passed 00:25:44.780 Test: remove_controller_test ...passed 00:25:44.780 Test: vq_avail_ring_get_test ...passed 00:25:44.780 Test: vq_packed_ring_test ...[2024-07-22 16:02:48.899764] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:25:44.780 passed 00:25:44.780 Test: vhost_blk_construct_test ...passed 00:25:44.780 00:25:44.780 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.780 suites 1 1 n/a 0 0 00:25:44.780 tests 7 7 7 0 0 00:25:44.780 asserts 145 145 145 0 n/a 00:25:44.780 00:25:44.780 Elapsed time = 0.012 seconds 00:25:44.780 00:25:44.780 real 0m0.051s 00:25:44.780 user 0m0.028s 00:25:44.780 sys 0m0.024s 00:25:44.780 16:02:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.780 ************************************ 00:25:44.780 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:44.780 END TEST unittest_vhost 00:25:44.780 ************************************ 00:25:44.780 16:02:48 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:25:44.780 16:02:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:44.780 16:02:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:44.780 16:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:44.780 ************************************ 00:25:44.780 START TEST unittest_dma 00:25:44.780 ************************************ 00:25:44.780 16:02:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:25:44.780 00:25:44.780 00:25:44.780 CUnit - A unit testing framework for C - Version 2.1-3 00:25:44.780 http://cunit.sourceforge.net/ 00:25:44.780 00:25:44.780 00:25:44.780 Suite: dma_suite 00:25:44.780 Test: test_dma ...[2024-07-22 16:02:48.990814] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:25:44.780 passed 00:25:44.780 00:25:44.780 Run Summary: Type Total Ran Passed Failed Inactive 00:25:44.780 suites 1 1 n/a 0 0 00:25:44.780 tests 1 1 1 0 0 00:25:44.780 asserts 50 50 50 0 n/a 00:25:44.780 00:25:44.780 Elapsed time = 0.000 seconds 00:25:44.780 00:25:44.780 real 0m0.032s 00:25:44.780 user 0m0.014s 00:25:44.780 sys 0m0.018s 00:25:44.780 16:02:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.780 16:02:49 -- common/autotest_common.sh@10 -- # set +x 00:25:44.780 ************************************ 00:25:44.780 END TEST unittest_dma 00:25:44.780 ************************************ 00:25:45.038 16:02:49 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:25:45.038 16:02:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:45.038 16:02:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:45.038 16:02:49 -- common/autotest_common.sh@10 -- # set +x 00:25:45.038 ************************************ 00:25:45.038 START TEST unittest_init 00:25:45.038 ************************************ 00:25:45.038 16:02:49 -- common/autotest_common.sh@1104 -- # unittest_init 00:25:45.038 16:02:49 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:25:45.038 00:25:45.038 00:25:45.038 CUnit - A unit testing framework for C - Version 2.1-3 00:25:45.038 http://cunit.sourceforge.net/ 00:25:45.038 00:25:45.038 00:25:45.038 Suite: subsystem_suite 00:25:45.038 Test: subsystem_sort_test_depends_on_single ...passed 00:25:45.038 Test: subsystem_sort_test_depends_on_multiple ...passed 00:25:45.038 Test: subsystem_sort_test_missing_dependency ...[2024-07-22 16:02:49.083008] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:25:45.038 passed 00:25:45.038 00:25:45.038 [2024-07-22 16:02:49.083272] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:25:45.038 Run Summary: Type Total Ran Passed Failed Inactive 00:25:45.038 suites 1 1 n/a 0 0 00:25:45.038 tests 3 3 3 0 0 00:25:45.038 asserts 20 20 20 0 n/a 00:25:45.038 00:25:45.038 Elapsed time = 0.000 seconds 00:25:45.038 00:25:45.038 real 0m0.040s 00:25:45.038 user 0m0.022s 00:25:45.038 sys 0m0.018s 00:25:45.038 16:02:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.038 16:02:49 -- common/autotest_common.sh@10 -- # set +x 00:25:45.038 ************************************ 00:25:45.038 END TEST unittest_init 00:25:45.038 ************************************ 00:25:45.038 16:02:49 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:25:45.038 16:02:49 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:45.038 16:02:49 -- unit/unittest.sh@290 -- # hostname 00:25:45.038 16:02:49 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:25:45.333 geninfo: WARNING: invalid characters removed from testname! 
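The coverage steps that follow merge the baseline counters captured before the unit tests with the counters the test binaries just produced, strip app/, dpdk/, examples/, rte_vhost and test/ sources so only library code is reported, and then render HTML with genhtml. A condensed sketch of that pipeline, assuming the same tree and output directory as this run (OUT is a hypothetical shorthand for /home/vagrant/spdk_repo/output/ut_coverage):

    OUT=/home/vagrant/spdk_repo/output/ut_coverage
    cd /home/vagrant/spdk_repo/spdk
    # capture counters written by the unit test binaries above
    lcov --rc lcov_branch_coverage=1 --no-external -q -d . -c -o $OUT/ut_cov_test.info
    # merge with the pre-test baseline, then filter out non-library sources
    lcov -q -a $OUT/ut_cov_base.info -a $OUT/ut_cov_test.info -o $OUT/ut_cov_total.info
    lcov -q -r $OUT/ut_cov_total.info '/home/vagrant/spdk_repo/spdk/test/*' -o $OUT/ut_cov_unit.info
    # produce the HTML view; this is what emits the "Processing file ..." lines below
    genhtml $OUT/ut_cov_unit.info --output-directory $OUT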
00:26:24.110 16:03:24 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:26:25.482 16:03:29 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:26:28.836 16:03:32 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:26:31.378 16:03:35 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:26:34.673 16:03:38 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:26:38.018 16:03:41 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:26:40.614 16:03:44 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:26:43.206 16:03:46 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:26:43.207 16:03:47 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:26:43.777 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:26:43.777 Found 313 entries. 
00:26:43.777 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:26:43.777 Writing .css and .png files. 00:26:43.777 Generating output. 00:26:43.777 Processing file include/linux/virtio_ring.h 00:26:44.035 Processing file include/spdk/nvmf_transport.h 00:26:44.035 Processing file include/spdk/nvme.h 00:26:44.035 Processing file include/spdk/base64.h 00:26:44.035 Processing file include/spdk/nvme_spec.h 00:26:44.035 Processing file include/spdk/thread.h 00:26:44.035 Processing file include/spdk/trace.h 00:26:44.035 Processing file include/spdk/endian.h 00:26:44.035 Processing file include/spdk/util.h 00:26:44.035 Processing file include/spdk/histogram_data.h 00:26:44.035 Processing file include/spdk/bdev_module.h 00:26:44.035 Processing file include/spdk/mmio.h 00:26:44.293 Processing file include/spdk_internal/nvme_tcp.h 00:26:44.293 Processing file include/spdk_internal/sgl.h 00:26:44.293 Processing file include/spdk_internal/rdma.h 00:26:44.293 Processing file include/spdk_internal/virtio.h 00:26:44.293 Processing file include/spdk_internal/utf.h 00:26:44.293 Processing file include/spdk_internal/sock.h 00:26:44.293 Processing file lib/accel/accel.c 00:26:44.293 Processing file lib/accel/accel_rpc.c 00:26:44.293 Processing file lib/accel/accel_sw.c 00:26:44.551 Processing file lib/bdev/bdev_zone.c 00:26:44.551 Processing file lib/bdev/part.c 00:26:44.551 Processing file lib/bdev/scsi_nvme.c 00:26:44.551 Processing file lib/bdev/bdev.c 00:26:44.551 Processing file lib/bdev/bdev_rpc.c 00:26:44.809 Processing file lib/blob/blob_bs_dev.c 00:26:44.809 Processing file lib/blob/blobstore.h 00:26:44.809 Processing file lib/blob/zeroes.c 00:26:44.809 Processing file lib/blob/blobstore.c 00:26:44.809 Processing file lib/blob/request.c 00:26:44.809 Processing file lib/blobfs/blobfs.c 00:26:44.809 Processing file lib/blobfs/tree.c 00:26:44.809 Processing file lib/conf/conf.c 00:26:44.809 Processing file lib/dma/dma.c 00:26:45.067 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:26:45.067 Processing file lib/env_dpdk/sigbus_handler.c 00:26:45.067 Processing file lib/env_dpdk/pci_virtio.c 00:26:45.067 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:26:45.067 Processing file lib/env_dpdk/pci_event.c 00:26:45.067 Processing file lib/env_dpdk/pci_vmd.c 00:26:45.067 Processing file lib/env_dpdk/pci_dpdk.c 00:26:45.067 Processing file lib/env_dpdk/pci_idxd.c 00:26:45.067 Processing file lib/env_dpdk/pci.c 00:26:45.067 Processing file lib/env_dpdk/init.c 00:26:45.067 Processing file lib/env_dpdk/memory.c 00:26:45.067 Processing file lib/env_dpdk/threads.c 00:26:45.067 Processing file lib/env_dpdk/env.c 00:26:45.067 Processing file lib/env_dpdk/pci_ioat.c 00:26:45.325 Processing file lib/event/app_rpc.c 00:26:45.325 Processing file lib/event/reactor.c 00:26:45.325 Processing file lib/event/app.c 00:26:45.325 Processing file lib/event/scheduler_static.c 00:26:45.325 Processing file lib/event/log_rpc.c 00:26:45.890 Processing file lib/ftl/ftl_nv_cache_io.h 00:26:45.890 Processing file lib/ftl/ftl_band_ops.c 00:26:45.890 Processing file lib/ftl/ftl_io.h 00:26:45.890 Processing file lib/ftl/ftl_band.c 00:26:45.890 Processing file lib/ftl/ftl_debug.h 00:26:45.890 Processing file lib/ftl/ftl_debug.c 00:26:45.890 Processing file lib/ftl/ftl_l2p_flat.c 00:26:45.890 Processing file lib/ftl/ftl_rq.c 00:26:45.890 Processing file lib/ftl/ftl_reloc.c 00:26:45.890 Processing file lib/ftl/ftl_core.c 00:26:45.890 Processing file lib/ftl/ftl_init.c 00:26:45.890 Processing file lib/ftl/ftl_l2p.c 00:26:45.890 
Processing file lib/ftl/ftl_layout.c 00:26:45.890 Processing file lib/ftl/ftl_writer.h 00:26:45.890 Processing file lib/ftl/ftl_p2l.c 00:26:45.890 Processing file lib/ftl/ftl_core.h 00:26:45.890 Processing file lib/ftl/ftl_sb.c 00:26:45.890 Processing file lib/ftl/ftl_nv_cache.c 00:26:45.890 Processing file lib/ftl/ftl_l2p_cache.c 00:26:45.890 Processing file lib/ftl/ftl_writer.c 00:26:45.890 Processing file lib/ftl/ftl_io.c 00:26:45.890 Processing file lib/ftl/ftl_band.h 00:26:45.890 Processing file lib/ftl/ftl_trace.c 00:26:45.890 Processing file lib/ftl/ftl_nv_cache.h 00:26:45.890 Processing file lib/ftl/base/ftl_base_bdev.c 00:26:45.890 Processing file lib/ftl/base/ftl_base_dev.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:26:46.148 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:26:46.148 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:26:46.148 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:26:46.148 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:26:46.148 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:26:46.148 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:26:46.148 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:26:46.406 Processing file lib/ftl/utils/ftl_property.c 00:26:46.406 Processing file lib/ftl/utils/ftl_conf.c 00:26:46.406 Processing file lib/ftl/utils/ftl_df.h 00:26:46.406 Processing file lib/ftl/utils/ftl_md.c 00:26:46.406 Processing file lib/ftl/utils/ftl_property.h 00:26:46.406 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:26:46.406 Processing file lib/ftl/utils/ftl_addr_utils.h 00:26:46.406 Processing file lib/ftl/utils/ftl_mempool.c 00:26:46.406 Processing file lib/ftl/utils/ftl_bitmap.c 00:26:46.406 Processing file lib/idxd/idxd_user.c 00:26:46.406 Processing file lib/idxd/idxd_kernel.c 00:26:46.406 Processing file lib/idxd/idxd.c 00:26:46.406 Processing file lib/idxd/idxd_internal.h 00:26:46.665 Processing file lib/init/subsystem.c 00:26:46.665 Processing file lib/init/subsystem_rpc.c 00:26:46.665 Processing file lib/init/rpc.c 00:26:46.665 Processing file lib/init/json_config.c 00:26:46.665 Processing file lib/ioat/ioat.c 00:26:46.665 Processing file lib/ioat/ioat_internal.h 00:26:46.923 Processing file lib/iscsi/md5.c 00:26:46.923 Processing file lib/iscsi/init_grp.c 00:26:46.923 Processing file lib/iscsi/iscsi_subsystem.c 00:26:46.923 Processing file lib/iscsi/conn.c 00:26:46.923 Processing file lib/iscsi/task.h 00:26:46.923 Processing file lib/iscsi/iscsi.c 00:26:46.923 Processing file lib/iscsi/iscsi_rpc.c 00:26:46.923 Processing file lib/iscsi/portal_grp.c 00:26:46.923 Processing file lib/iscsi/iscsi.h 00:26:46.923 Processing file lib/iscsi/tgt_node.c 00:26:46.923 Processing file lib/iscsi/task.c 00:26:46.923 Processing file lib/iscsi/param.c 00:26:47.181 Processing file lib/json/json_parse.c 00:26:47.181 Processing file lib/json/json_util.c 00:26:47.181 Processing file 
lib/json/json_write.c 00:26:47.181 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:26:47.181 Processing file lib/jsonrpc/jsonrpc_client.c 00:26:47.181 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:26:47.181 Processing file lib/jsonrpc/jsonrpc_server.c 00:26:47.181 Processing file lib/log/log_flags.c 00:26:47.181 Processing file lib/log/log_deprecated.c 00:26:47.181 Processing file lib/log/log.c 00:26:47.440 Processing file lib/lvol/lvol.c 00:26:47.440 Processing file lib/nbd/nbd_rpc.c 00:26:47.440 Processing file lib/nbd/nbd.c 00:26:47.440 Processing file lib/notify/notify_rpc.c 00:26:47.440 Processing file lib/notify/notify.c 00:26:48.375 Processing file lib/nvme/nvme_quirks.c 00:26:48.375 Processing file lib/nvme/nvme_internal.h 00:26:48.375 Processing file lib/nvme/nvme_discovery.c 00:26:48.375 Processing file lib/nvme/nvme_transport.c 00:26:48.375 Processing file lib/nvme/nvme_poll_group.c 00:26:48.375 Processing file lib/nvme/nvme_qpair.c 00:26:48.375 Processing file lib/nvme/nvme_cuse.c 00:26:48.375 Processing file lib/nvme/nvme_io_msg.c 00:26:48.375 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:26:48.375 Processing file lib/nvme/nvme.c 00:26:48.375 Processing file lib/nvme/nvme_ns.c 00:26:48.375 Processing file lib/nvme/nvme_pcie_common.c 00:26:48.375 Processing file lib/nvme/nvme_fabric.c 00:26:48.375 Processing file lib/nvme/nvme_opal.c 00:26:48.375 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:26:48.375 Processing file lib/nvme/nvme_pcie.c 00:26:48.375 Processing file lib/nvme/nvme_zns.c 00:26:48.375 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:26:48.375 Processing file lib/nvme/nvme_rdma.c 00:26:48.375 Processing file lib/nvme/nvme_tcp.c 00:26:48.375 Processing file lib/nvme/nvme_ns_cmd.c 00:26:48.375 Processing file lib/nvme/nvme_vfio_user.c 00:26:48.375 Processing file lib/nvme/nvme_ctrlr.c 00:26:48.375 Processing file lib/nvme/nvme_pcie_internal.h 00:26:48.634 Processing file lib/nvmf/ctrlr.c 00:26:48.634 Processing file lib/nvmf/ctrlr_discovery.c 00:26:48.634 Processing file lib/nvmf/ctrlr_bdev.c 00:26:48.634 Processing file lib/nvmf/nvmf_rpc.c 00:26:48.634 Processing file lib/nvmf/nvmf_internal.h 00:26:48.634 Processing file lib/nvmf/nvmf.c 00:26:48.634 Processing file lib/nvmf/subsystem.c 00:26:48.634 Processing file lib/nvmf/rdma.c 00:26:48.634 Processing file lib/nvmf/tcp.c 00:26:48.634 Processing file lib/nvmf/transport.c 00:26:48.896 Processing file lib/rdma/rdma_verbs.c 00:26:48.896 Processing file lib/rdma/common.c 00:26:48.896 Processing file lib/rpc/rpc.c 00:26:49.157 Processing file lib/scsi/scsi_rpc.c 00:26:49.157 Processing file lib/scsi/scsi_pr.c 00:26:49.157 Processing file lib/scsi/lun.c 00:26:49.157 Processing file lib/scsi/port.c 00:26:49.157 Processing file lib/scsi/scsi_bdev.c 00:26:49.157 Processing file lib/scsi/scsi.c 00:26:49.158 Processing file lib/scsi/task.c 00:26:49.158 Processing file lib/scsi/dev.c 00:26:49.158 Processing file lib/sock/sock_rpc.c 00:26:49.158 Processing file lib/sock/sock.c 00:26:49.158 Processing file lib/thread/iobuf.c 00:26:49.158 Processing file lib/thread/thread.c 00:26:49.418 Processing file lib/trace/trace_rpc.c 00:26:49.418 Processing file lib/trace/trace.c 00:26:49.418 Processing file lib/trace/trace_flags.c 00:26:49.418 Processing file lib/trace_parser/trace.cpp 00:26:49.418 Processing file lib/ublk/ublk.c 00:26:49.418 Processing file lib/ublk/ublk_rpc.c 00:26:49.418 Processing file lib/ut/ut.c 00:26:49.677 Processing file lib/ut_mock/mock.c 00:26:49.936 Processing file lib/util/bit_array.c 
00:26:49.936 Processing file lib/util/strerror_tls.c 00:26:49.936 Processing file lib/util/iov.c 00:26:49.936 Processing file lib/util/fd.c 00:26:49.936 Processing file lib/util/xor.c 00:26:49.936 Processing file lib/util/crc16.c 00:26:49.936 Processing file lib/util/crc32.c 00:26:49.936 Processing file lib/util/pipe.c 00:26:49.936 Processing file lib/util/crc64.c 00:26:49.936 Processing file lib/util/crc32c.c 00:26:49.936 Processing file lib/util/file.c 00:26:49.936 Processing file lib/util/hexlify.c 00:26:49.936 Processing file lib/util/math.c 00:26:49.936 Processing file lib/util/fd_group.c 00:26:49.936 Processing file lib/util/string.c 00:26:49.936 Processing file lib/util/base64.c 00:26:49.936 Processing file lib/util/crc32_ieee.c 00:26:49.936 Processing file lib/util/dif.c 00:26:49.936 Processing file lib/util/cpuset.c 00:26:49.936 Processing file lib/util/zipf.c 00:26:49.936 Processing file lib/util/uuid.c 00:26:49.936 Processing file lib/vfio_user/host/vfio_user.c 00:26:49.936 Processing file lib/vfio_user/host/vfio_user_pci.c 00:26:50.196 Processing file lib/vhost/vhost_internal.h 00:26:50.196 Processing file lib/vhost/vhost_rpc.c 00:26:50.196 Processing file lib/vhost/vhost_scsi.c 00:26:50.196 Processing file lib/vhost/vhost.c 00:26:50.196 Processing file lib/vhost/rte_vhost_user.c 00:26:50.196 Processing file lib/vhost/vhost_blk.c 00:26:50.196 Processing file lib/virtio/virtio_pci.c 00:26:50.196 Processing file lib/virtio/virtio_vhost_user.c 00:26:50.196 Processing file lib/virtio/virtio_vfio_user.c 00:26:50.196 Processing file lib/virtio/virtio.c 00:26:50.455 Processing file lib/vmd/vmd.c 00:26:50.455 Processing file lib/vmd/led.c 00:26:50.455 Processing file module/accel/dsa/accel_dsa_rpc.c 00:26:50.455 Processing file module/accel/dsa/accel_dsa.c 00:26:50.455 Processing file module/accel/error/accel_error_rpc.c 00:26:50.455 Processing file module/accel/error/accel_error.c 00:26:50.714 Processing file module/accel/iaa/accel_iaa.c 00:26:50.714 Processing file module/accel/iaa/accel_iaa_rpc.c 00:26:50.714 Processing file module/accel/ioat/accel_ioat_rpc.c 00:26:50.714 Processing file module/accel/ioat/accel_ioat.c 00:26:50.714 Processing file module/bdev/aio/bdev_aio.c 00:26:50.714 Processing file module/bdev/aio/bdev_aio_rpc.c 00:26:50.714 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:26:50.714 Processing file module/bdev/delay/vbdev_delay.c 00:26:50.986 Processing file module/bdev/error/vbdev_error.c 00:26:50.986 Processing file module/bdev/error/vbdev_error_rpc.c 00:26:50.986 Processing file module/bdev/ftl/bdev_ftl.c 00:26:50.986 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:26:50.986 Processing file module/bdev/gpt/gpt.c 00:26:50.986 Processing file module/bdev/gpt/gpt.h 00:26:50.986 Processing file module/bdev/gpt/vbdev_gpt.c 00:26:50.986 Processing file module/bdev/iscsi/bdev_iscsi.c 00:26:50.986 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:26:51.245 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:26:51.245 Processing file module/bdev/lvol/vbdev_lvol.c 00:26:51.245 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:26:51.245 Processing file module/bdev/malloc/bdev_malloc.c 00:26:51.245 Processing file module/bdev/null/bdev_null_rpc.c 00:26:51.245 Processing file module/bdev/null/bdev_null.c 00:26:51.504 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:26:51.504 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:26:51.504 Processing file module/bdev/nvme/nvme_rpc.c 00:26:51.504 Processing file module/bdev/nvme/bdev_mdns_client.c 
00:26:51.504 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:26:51.504 Processing file module/bdev/nvme/bdev_nvme.c 00:26:51.504 Processing file module/bdev/nvme/vbdev_opal.c 00:26:51.763 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:26:51.763 Processing file module/bdev/passthru/vbdev_passthru.c 00:26:52.022 Processing file module/bdev/raid/raid5f.c 00:26:52.022 Processing file module/bdev/raid/bdev_raid_rpc.c 00:26:52.022 Processing file module/bdev/raid/raid1.c 00:26:52.022 Processing file module/bdev/raid/bdev_raid.h 00:26:52.022 Processing file module/bdev/raid/bdev_raid.c 00:26:52.022 Processing file module/bdev/raid/bdev_raid_sb.c 00:26:52.022 Processing file module/bdev/raid/concat.c 00:26:52.022 Processing file module/bdev/raid/raid0.c 00:26:52.022 Processing file module/bdev/split/vbdev_split_rpc.c 00:26:52.022 Processing file module/bdev/split/vbdev_split.c 00:26:52.022 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:26:52.022 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:26:52.022 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:26:52.281 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:26:52.281 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:26:52.281 Processing file module/blob/bdev/blob_bdev.c 00:26:52.281 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:26:52.281 Processing file module/blobfs/bdev/blobfs_bdev.c 00:26:52.281 Processing file module/env_dpdk/env_dpdk_rpc.c 00:26:52.281 Processing file module/event/subsystems/accel/accel.c 00:26:52.540 Processing file module/event/subsystems/bdev/bdev.c 00:26:52.540 Processing file module/event/subsystems/iobuf/iobuf.c 00:26:52.540 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:26:52.540 Processing file module/event/subsystems/iscsi/iscsi.c 00:26:52.540 Processing file module/event/subsystems/nbd/nbd.c 00:26:52.540 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:26:52.540 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:26:52.798 Processing file module/event/subsystems/scheduler/scheduler.c 00:26:52.798 Processing file module/event/subsystems/scsi/scsi.c 00:26:52.798 Processing file module/event/subsystems/sock/sock.c 00:26:52.798 Processing file module/event/subsystems/ublk/ublk.c 00:26:52.798 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:26:52.798 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:26:53.056 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:26:53.056 Processing file module/event/subsystems/vmd/vmd.c 00:26:53.056 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:26:53.056 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:26:53.056 Processing file module/scheduler/gscheduler/gscheduler.c 00:26:53.056 Processing file module/sock/sock_kernel.h 00:26:53.315 Processing file module/sock/posix/posix.c 00:26:53.315 Writing directory view page. 
00:26:53.315 Overall coverage rate: 00:26:53.315 lines......: 38.6% (39266 of 101727 lines) 00:26:53.315 functions..: 42.2% (3587 of 8494 functions) 00:26:53.315 00:26:53.315 00:26:53.315 ===================== 00:26:53.315 All unit tests passed 00:26:53.315 ===================== 00:26:53.315 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:26:53.315 16:03:57 -- unit/unittest.sh@302 -- # set +x 00:26:53.315 00:26:53.315 00:26:53.315 00:26:53.315 real 3m42.326s 00:26:53.315 user 3m14.774s 00:26:53.315 sys 0m18.847s 00:26:53.315 16:03:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.315 ************************************ 00:26:53.315 END TEST unittest 00:26:53.315 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:26:53.315 ************************************ 00:26:53.315 16:03:57 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:26:53.315 16:03:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:26:53.315 16:03:57 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:26:53.315 16:03:57 -- spdk/autotest.sh@173 -- # timing_enter lib 00:26:53.315 16:03:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:53.315 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:26:53.315 16:03:57 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:26:53.315 16:03:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:53.315 16:03:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:53.315 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:26:53.315 ************************************ 00:26:53.315 START TEST env 00:26:53.315 ************************************ 00:26:53.315 16:03:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:26:53.315 * Looking for test storage... 
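Each env test that follows is a standalone binary under test/env and can be re-run outside the autotest harness when debugging a failure. A minimal sketch; the binary paths are the same ones the run_test calls use later in this log, and the EAL-style arguments mirror the env_dpdk_post_init invocation shown further down:

    cd /home/vagrant/spdk_repo/spdk
    ./test/env/memory/memory_ut        # each binary prints its own CUnit run summary
    ./test/env/vtophys/vtophys
    ./test/env/pci/pci_ut
    ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000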
00:26:53.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:26:53.315 16:03:57 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:26:53.315 16:03:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:53.315 16:03:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:53.315 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:26:53.315 ************************************ 00:26:53.315 START TEST env_memory 00:26:53.315 ************************************ 00:26:53.315 16:03:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:26:53.315 00:26:53.315 00:26:53.315 CUnit - A unit testing framework for C - Version 2.1-3 00:26:53.315 http://cunit.sourceforge.net/ 00:26:53.315 00:26:53.315 00:26:53.315 Suite: memory 00:26:53.574 Test: alloc and free memory map ...[2024-07-22 16:03:57.606074] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:26:53.574 passed 00:26:53.574 Test: mem map translation ...[2024-07-22 16:03:57.669289] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:26:53.574 [2024-07-22 16:03:57.669405] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:26:53.574 [2024-07-22 16:03:57.669548] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:26:53.574 [2024-07-22 16:03:57.669586] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:26:53.574 passed 00:26:53.574 Test: mem map registration ...[2024-07-22 16:03:57.769853] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:26:53.574 [2024-07-22 16:03:57.769952] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:26:53.574 passed 00:26:53.833 Test: mem map adjacent registrations ...passed 00:26:53.833 00:26:53.833 Run Summary: Type Total Ran Passed Failed Inactive 00:26:53.833 suites 1 1 n/a 0 0 00:26:53.833 tests 4 4 4 0 0 00:26:53.833 asserts 152 152 152 0 n/a 00:26:53.833 00:26:53.833 Elapsed time = 0.342 seconds 00:26:53.833 00:26:53.833 real 0m0.376s 00:26:53.833 user 0m0.355s 00:26:53.833 sys 0m0.021s 00:26:53.833 16:03:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.833 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:26:53.833 ************************************ 00:26:53.833 END TEST env_memory 00:26:53.833 ************************************ 00:26:53.833 16:03:57 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:26:53.833 16:03:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:53.833 16:03:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:53.833 16:03:57 -- common/autotest_common.sh@10 -- # set +x 00:26:53.833 ************************************ 00:26:53.833 START TEST env_vtophys 00:26:53.833 ************************************ 00:26:53.833 16:03:57 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:26:53.833 EAL: lib.eal log level changed from notice to debug 00:26:53.833 EAL: Detected lcore 0 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 1 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 2 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 3 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 4 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 5 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 6 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 7 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 8 as core 0 on socket 0 00:26:53.833 EAL: Detected lcore 9 as core 0 on socket 0 00:26:53.833 EAL: Maximum logical cores by configuration: 128 00:26:53.833 EAL: Detected CPU lcores: 10 00:26:53.833 EAL: Detected NUMA nodes: 1 00:26:53.833 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:26:53.833 EAL: Checking presence of .so 'librte_eal.so.24' 00:26:53.833 EAL: Checking presence of .so 'librte_eal.so' 00:26:53.833 EAL: Detected static linkage of DPDK 00:26:53.833 EAL: No shared files mode enabled, IPC will be disabled 00:26:53.833 EAL: Selected IOVA mode 'PA' 00:26:53.833 EAL: Probing VFIO support... 00:26:53.833 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:26:53.833 EAL: VFIO modules not loaded, skipping VFIO support... 00:26:53.833 EAL: Ask a virtual area of 0x2e000 bytes 00:26:53.833 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:26:53.833 EAL: Setting up physically contiguous memory... 00:26:53.833 EAL: Setting maximum number of open files to 1048576 00:26:53.833 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:26:53.833 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:26:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:26:53.833 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:26:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:26:53.833 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:26:53.833 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:26:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:26:53.833 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:26:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:26:53.833 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:26:53.833 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:26:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:26:53.833 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:26:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:26:53.833 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:26:53.833 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:26:53.833 EAL: Ask a virtual area of 0x61000 bytes 00:26:53.833 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:26:53.833 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:26:53.833 EAL: Ask a virtual area of 0x400000000 bytes 00:26:53.833 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:26:53.833 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:26:53.833 EAL: Hugepages will be freed exactly as allocated. 
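Each of the four memseg lists above reserves a 0x400000000-byte (16 GiB) virtual-address window, so roughly 64 GiB of VA is set aside up front while hugepages are only mapped in later as the heap grows; in this run the reservations land at 0x200000200000 and upward. A quick shell check of that arithmetic:

    echo $(( 4 * 0x400000000 / 1024 / 1024 / 1024 ))   # 64 (GiB of reserved VA)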
00:26:53.833 EAL: No shared files mode enabled, IPC is disabled 00:26:53.833 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: TSC frequency is ~2200000 KHz 00:26:54.092 EAL: Main lcore 0 is ready (tid=79e4aee45a80;cpuset=[0]) 00:26:54.092 EAL: Trying to obtain current memory policy. 00:26:54.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.092 EAL: Restoring previous memory policy: 0 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was expanded by 2MB 00:26:54.092 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:26:54.092 EAL: Mem event callback 'spdk:(nil)' registered 00:26:54.092 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:26:54.092 00:26:54.092 00:26:54.092 CUnit - A unit testing framework for C - Version 2.1-3 00:26:54.092 http://cunit.sourceforge.net/ 00:26:54.092 00:26:54.092 00:26:54.092 Suite: components_suite 00:26:54.092 Test: vtophys_malloc_test ...passed 00:26:54.092 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:26:54.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.092 EAL: Restoring previous memory policy: 4 00:26:54.092 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was expanded by 4MB 00:26:54.092 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was shrunk by 4MB 00:26:54.092 EAL: Trying to obtain current memory policy. 00:26:54.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.092 EAL: Restoring previous memory policy: 4 00:26:54.092 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was expanded by 6MB 00:26:54.092 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was shrunk by 6MB 00:26:54.092 EAL: Trying to obtain current memory policy. 00:26:54.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.092 EAL: Restoring previous memory policy: 4 00:26:54.092 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was expanded by 10MB 00:26:54.092 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was shrunk by 10MB 00:26:54.092 EAL: Trying to obtain current memory policy. 
00:26:54.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.092 EAL: Restoring previous memory policy: 4 00:26:54.092 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.092 EAL: request: mp_malloc_sync 00:26:54.092 EAL: No shared files mode enabled, IPC is disabled 00:26:54.092 EAL: Heap on socket 0 was expanded by 18MB 00:26:54.350 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.350 EAL: request: mp_malloc_sync 00:26:54.350 EAL: No shared files mode enabled, IPC is disabled 00:26:54.350 EAL: Heap on socket 0 was shrunk by 18MB 00:26:54.350 EAL: Trying to obtain current memory policy. 00:26:54.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.350 EAL: Restoring previous memory policy: 4 00:26:54.350 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.350 EAL: request: mp_malloc_sync 00:26:54.350 EAL: No shared files mode enabled, IPC is disabled 00:26:54.350 EAL: Heap on socket 0 was expanded by 34MB 00:26:54.350 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.350 EAL: request: mp_malloc_sync 00:26:54.350 EAL: No shared files mode enabled, IPC is disabled 00:26:54.350 EAL: Heap on socket 0 was shrunk by 34MB 00:26:54.350 EAL: Trying to obtain current memory policy. 00:26:54.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.350 EAL: Restoring previous memory policy: 4 00:26:54.350 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.350 EAL: request: mp_malloc_sync 00:26:54.350 EAL: No shared files mode enabled, IPC is disabled 00:26:54.350 EAL: Heap on socket 0 was expanded by 66MB 00:26:54.609 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.609 EAL: request: mp_malloc_sync 00:26:54.609 EAL: No shared files mode enabled, IPC is disabled 00:26:54.609 EAL: Heap on socket 0 was shrunk by 66MB 00:26:54.609 EAL: Trying to obtain current memory policy. 00:26:54.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:54.609 EAL: Restoring previous memory policy: 4 00:26:54.609 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.609 EAL: request: mp_malloc_sync 00:26:54.609 EAL: No shared files mode enabled, IPC is disabled 00:26:54.609 EAL: Heap on socket 0 was expanded by 130MB 00:26:54.867 EAL: Calling mem event callback 'spdk:(nil)' 00:26:54.867 EAL: request: mp_malloc_sync 00:26:54.867 EAL: No shared files mode enabled, IPC is disabled 00:26:54.867 EAL: Heap on socket 0 was shrunk by 130MB 00:26:55.125 EAL: Trying to obtain current memory policy. 00:26:55.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:55.383 EAL: Restoring previous memory policy: 4 00:26:55.383 EAL: Calling mem event callback 'spdk:(nil)' 00:26:55.383 EAL: request: mp_malloc_sync 00:26:55.383 EAL: No shared files mode enabled, IPC is disabled 00:26:55.383 EAL: Heap on socket 0 was expanded by 258MB 00:26:55.642 EAL: Calling mem event callback 'spdk:(nil)' 00:26:55.899 EAL: request: mp_malloc_sync 00:26:55.899 EAL: No shared files mode enabled, IPC is disabled 00:26:55.899 EAL: Heap on socket 0 was shrunk by 258MB 00:26:56.157 EAL: Trying to obtain current memory policy. 
00:26:56.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:56.419 EAL: Restoring previous memory policy: 4 00:26:56.419 EAL: Calling mem event callback 'spdk:(nil)' 00:26:56.419 EAL: request: mp_malloc_sync 00:26:56.419 EAL: No shared files mode enabled, IPC is disabled 00:26:56.419 EAL: Heap on socket 0 was expanded by 514MB 00:26:57.794 EAL: Calling mem event callback 'spdk:(nil)' 00:26:57.794 EAL: request: mp_malloc_sync 00:26:57.794 EAL: No shared files mode enabled, IPC is disabled 00:26:57.794 EAL: Heap on socket 0 was shrunk by 514MB 00:26:58.366 EAL: Trying to obtain current memory policy. 00:26:58.366 EAL: Setting policy MPOL_PREFERRED for socket 0 00:26:58.624 EAL: Restoring previous memory policy: 4 00:26:58.624 EAL: Calling mem event callback 'spdk:(nil)' 00:26:58.624 EAL: request: mp_malloc_sync 00:26:58.624 EAL: No shared files mode enabled, IPC is disabled 00:26:58.624 EAL: Heap on socket 0 was expanded by 1026MB 00:27:00.528 EAL: Calling mem event callback 'spdk:(nil)' 00:27:00.786 EAL: request: mp_malloc_sync 00:27:00.786 EAL: No shared files mode enabled, IPC is disabled 00:27:00.786 EAL: Heap on socket 0 was shrunk by 1026MB 00:27:02.160 passed 00:27:02.160 00:27:02.160 Run Summary: Type Total Ran Passed Failed Inactive 00:27:02.160 suites 1 1 n/a 0 0 00:27:02.160 tests 2 2 2 0 0 00:27:02.160 asserts 5488 5488 5488 0 n/a 00:27:02.160 00:27:02.160 Elapsed time = 8.090 seconds 00:27:02.160 EAL: Calling mem event callback 'spdk:(nil)' 00:27:02.160 EAL: request: mp_malloc_sync 00:27:02.160 EAL: No shared files mode enabled, IPC is disabled 00:27:02.160 EAL: Heap on socket 0 was shrunk by 2MB 00:27:02.160 EAL: No shared files mode enabled, IPC is disabled 00:27:02.160 EAL: No shared files mode enabled, IPC is disabled 00:27:02.160 EAL: No shared files mode enabled, IPC is disabled 00:27:02.160 00:27:02.160 real 0m8.370s 00:27:02.160 user 0m7.011s 00:27:02.160 sys 0m1.233s 00:27:02.160 ************************************ 00:27:02.160 END TEST env_vtophys 00:27:02.160 ************************************ 00:27:02.160 16:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.160 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.160 16:04:06 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:02.160 16:04:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:02.160 16:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:02.160 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.160 ************************************ 00:27:02.160 START TEST env_pci 00:27:02.160 ************************************ 00:27:02.160 16:04:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:02.160 00:27:02.160 00:27:02.160 CUnit - A unit testing framework for C - Version 2.1-3 00:27:02.160 http://cunit.sourceforge.net/ 00:27:02.160 00:27:02.160 00:27:02.160 Suite: pci 00:27:02.160 Test: pci_hook ...[2024-07-22 16:04:06.426465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61117 has claimed it 00:27:02.418 EAL: Cannot find device (10000:00:01.0) 00:27:02.418 EAL: Failed to attach device on primary process 00:27:02.418 passed 00:27:02.418 00:27:02.418 Run Summary: Type Total Ran Passed Failed Inactive 00:27:02.418 suites 1 1 n/a 0 0 00:27:02.418 tests 1 1 1 0 0 00:27:02.418 asserts 25 25 25 0 n/a 00:27:02.418 00:27:02.418 Elapsed 
time = 0.009 seconds 00:27:02.418 00:27:02.418 real 0m0.089s 00:27:02.418 user 0m0.039s 00:27:02.418 sys 0m0.050s 00:27:02.418 16:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.418 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.418 ************************************ 00:27:02.418 END TEST env_pci 00:27:02.418 ************************************ 00:27:02.418 16:04:06 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:27:02.418 16:04:06 -- env/env.sh@15 -- # uname 00:27:02.418 16:04:06 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:27:02.418 16:04:06 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:27:02.418 16:04:06 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:02.418 16:04:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:27:02.418 16:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:02.418 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.418 ************************************ 00:27:02.418 START TEST env_dpdk_post_init 00:27:02.418 ************************************ 00:27:02.418 16:04:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:02.418 EAL: Detected CPU lcores: 10 00:27:02.418 EAL: Detected NUMA nodes: 1 00:27:02.418 EAL: Detected static linkage of DPDK 00:27:02.418 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:02.418 EAL: Selected IOVA mode 'PA' 00:27:02.677 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:02.677 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:27:02.677 Starting DPDK initialization... 00:27:02.677 Starting SPDK post initialization... 00:27:02.677 SPDK NVMe probe 00:27:02.677 Attaching to 0000:00:06.0 00:27:02.677 Attached to 0000:00:06.0 00:27:02.677 Cleaning up... 
00:27:02.677 00:27:02.677 real 0m0.283s 00:27:02.677 user 0m0.085s 00:27:02.677 sys 0m0.101s 00:27:02.677 16:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.677 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.677 ************************************ 00:27:02.677 END TEST env_dpdk_post_init 00:27:02.677 ************************************ 00:27:02.677 16:04:06 -- env/env.sh@26 -- # uname 00:27:02.677 16:04:06 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:27:02.677 16:04:06 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:02.677 16:04:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:02.677 16:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:02.677 16:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.677 ************************************ 00:27:02.677 START TEST env_mem_callbacks 00:27:02.677 ************************************ 00:27:02.677 16:04:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:02.677 EAL: Detected CPU lcores: 10 00:27:02.677 EAL: Detected NUMA nodes: 1 00:27:02.677 EAL: Detected static linkage of DPDK 00:27:02.935 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:02.935 EAL: Selected IOVA mode 'PA' 00:27:02.935 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:02.935 00:27:02.935 00:27:02.935 CUnit - A unit testing framework for C - Version 2.1-3 00:27:02.935 http://cunit.sourceforge.net/ 00:27:02.935 00:27:02.935 00:27:02.935 Suite: memory 00:27:02.935 Test: test ... 00:27:02.935 register 0x200000200000 2097152 00:27:02.935 malloc 3145728 00:27:02.935 register 0x200000400000 4194304 00:27:02.935 buf 0x2000004fffc0 len 3145728 PASSED 00:27:02.935 malloc 64 00:27:02.935 buf 0x2000004ffec0 len 64 PASSED 00:27:02.935 malloc 4194304 00:27:02.935 register 0x200000800000 6291456 00:27:02.935 buf 0x2000009fffc0 len 4194304 PASSED 00:27:02.935 free 0x2000004fffc0 3145728 00:27:02.935 free 0x2000004ffec0 64 00:27:02.935 unregister 0x200000400000 4194304 PASSED 00:27:02.935 free 0x2000009fffc0 4194304 00:27:02.935 unregister 0x200000800000 6291456 PASSED 00:27:02.935 malloc 8388608 00:27:02.935 register 0x200000400000 10485760 00:27:02.935 buf 0x2000005fffc0 len 8388608 PASSED 00:27:02.935 free 0x2000005fffc0 8388608 00:27:02.935 unregister 0x200000400000 10485760 PASSED 00:27:02.935 passed 00:27:02.935 00:27:02.935 Run Summary: Type Total Ran Passed Failed Inactive 00:27:02.935 suites 1 1 n/a 0 0 00:27:02.935 tests 1 1 1 0 0 00:27:02.935 asserts 15 15 15 0 n/a 00:27:02.935 00:27:02.935 Elapsed time = 0.079 seconds 00:27:02.935 ************************************ 00:27:02.935 END TEST env_mem_callbacks 00:27:02.935 ************************************ 00:27:02.935 00:27:02.935 real 0m0.309s 00:27:02.935 user 0m0.127s 00:27:02.935 sys 0m0.079s 00:27:02.935 16:04:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:02.935 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.193 ************************************ 00:27:03.193 END TEST env 00:27:03.193 ************************************ 00:27:03.193 00:27:03.193 real 0m9.773s 00:27:03.193 user 0m7.717s 00:27:03.193 sys 0m1.723s 00:27:03.193 16:04:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.193 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.193 16:04:07 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
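The rpc suite that starts here drives a running spdk_tgt over its default UNIX socket, /var/tmp/spdk.sock; rpc_cmd in the log is a thin wrapper around scripts/rpc.py. A minimal manual equivalent of the rpc_integrity sequence below, assuming the target is started the same way rpc.sh does and has finished listening before calls are issued:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -e bdev &                      # same invocation as rpc.sh@64 below
    ./scripts/rpc.py bdev_malloc_create 8 512           # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs                     # emits the JSON descriptors quoted below
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0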
00:27:03.193 16:04:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:03.193 16:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:03.193 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.193 ************************************ 00:27:03.193 START TEST rpc 00:27:03.193 ************************************ 00:27:03.193 16:04:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:27:03.193 * Looking for test storage... 00:27:03.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:27:03.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.193 16:04:07 -- rpc/rpc.sh@65 -- # spdk_pid=61235 00:27:03.193 16:04:07 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:27:03.193 16:04:07 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:03.193 16:04:07 -- rpc/rpc.sh@67 -- # waitforlisten 61235 00:27:03.193 16:04:07 -- common/autotest_common.sh@819 -- # '[' -z 61235 ']' 00:27:03.193 16:04:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.193 16:04:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:03.193 16:04:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.193 16:04:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:03.193 16:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.193 [2024-07-22 16:04:07.456386] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:03.193 [2024-07-22 16:04:07.456879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61235 ] 00:27:03.450 [2024-07-22 16:04:07.635823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.711 [2024-07-22 16:04:07.937601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:03.711 [2024-07-22 16:04:07.937867] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:27:03.711 [2024-07-22 16:04:07.937896] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61235' to capture a snapshot of events at runtime. 00:27:03.711 [2024-07-22 16:04:07.937913] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61235 for offline analysis/debug. 
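The app_setup_trace notices above mean the bdev tracepoint group is being written to a per-PID shared-memory file, which can be inspected while the target runs or copied for later analysis exactly as the notice suggests. A short sketch, assuming the spdk_trace tool is built under build/bin in this tree:

    # snapshot the tracepoints of the live target (pid 61235 in this run)
    ./build/bin/spdk_trace -s spdk_tgt -p 61235
    # or keep the shared-memory ring for offline analysis after the target exits
    cp /dev/shm/spdk_tgt_trace.pid61235 /tmp/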
00:27:03.711 [2024-07-22 16:04:07.937972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.084 16:04:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:05.084 16:04:09 -- common/autotest_common.sh@852 -- # return 0 00:27:05.084 16:04:09 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:05.084 16:04:09 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:05.084 16:04:09 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:27:05.084 16:04:09 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:27:05.084 16:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:05.084 16:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.084 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.084 ************************************ 00:27:05.084 START TEST rpc_integrity 00:27:05.084 ************************************ 00:27:05.084 16:04:09 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:27:05.084 16:04:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:05.084 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.084 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.084 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.084 16:04:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:05.084 16:04:09 -- rpc/rpc.sh@13 -- # jq length 00:27:05.084 16:04:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:05.084 16:04:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:27:05.084 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.084 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.084 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.084 16:04:09 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:27:05.084 16:04:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:27:05.084 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.084 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.084 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.084 16:04:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:05.084 { 00:27:05.084 "name": "Malloc0", 00:27:05.084 "aliases": [ 00:27:05.084 "e7c60f48-44ae-4b35-8064-be8eb6d1a656" 00:27:05.084 ], 00:27:05.084 "product_name": "Malloc disk", 00:27:05.084 "block_size": 512, 00:27:05.084 "num_blocks": 16384, 00:27:05.084 "uuid": "e7c60f48-44ae-4b35-8064-be8eb6d1a656", 00:27:05.084 "assigned_rate_limits": { 00:27:05.084 "rw_ios_per_sec": 0, 00:27:05.084 "rw_mbytes_per_sec": 0, 00:27:05.084 "r_mbytes_per_sec": 0, 00:27:05.084 "w_mbytes_per_sec": 0 00:27:05.084 }, 00:27:05.084 "claimed": false, 00:27:05.084 "zoned": false, 00:27:05.084 "supported_io_types": { 00:27:05.084 "read": true, 00:27:05.084 "write": true, 00:27:05.084 "unmap": true, 00:27:05.084 "write_zeroes": true, 00:27:05.084 "flush": true, 00:27:05.084 "reset": true, 00:27:05.084 "compare": false, 00:27:05.084 "compare_and_write": false, 00:27:05.084 "abort": true, 00:27:05.084 "nvme_admin": false, 00:27:05.084 "nvme_io": false 00:27:05.084 }, 00:27:05.084 "memory_domains": [ 00:27:05.084 { 00:27:05.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.084 
"dma_device_type": 2 00:27:05.084 } 00:27:05.084 ], 00:27:05.084 "driver_specific": {} 00:27:05.084 } 00:27:05.084 ]' 00:27:05.084 16:04:09 -- rpc/rpc.sh@17 -- # jq length 00:27:05.084 16:04:09 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:05.084 16:04:09 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:27:05.084 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.084 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.084 [2024-07-22 16:04:09.237321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:27:05.084 [2024-07-22 16:04:09.237432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.084 [2024-07-22 16:04:09.237484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:27:05.084 [2024-07-22 16:04:09.237506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.084 [2024-07-22 16:04:09.240805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.084 [2024-07-22 16:04:09.240861] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:05.084 Passthru0 00:27:05.084 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.084 16:04:09 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:05.084 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.084 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.084 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.084 16:04:09 -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:05.084 { 00:27:05.084 "name": "Malloc0", 00:27:05.084 "aliases": [ 00:27:05.084 "e7c60f48-44ae-4b35-8064-be8eb6d1a656" 00:27:05.084 ], 00:27:05.084 "product_name": "Malloc disk", 00:27:05.084 "block_size": 512, 00:27:05.084 "num_blocks": 16384, 00:27:05.084 "uuid": "e7c60f48-44ae-4b35-8064-be8eb6d1a656", 00:27:05.084 "assigned_rate_limits": { 00:27:05.084 "rw_ios_per_sec": 0, 00:27:05.084 "rw_mbytes_per_sec": 0, 00:27:05.084 "r_mbytes_per_sec": 0, 00:27:05.084 "w_mbytes_per_sec": 0 00:27:05.085 }, 00:27:05.085 "claimed": true, 00:27:05.085 "claim_type": "exclusive_write", 00:27:05.085 "zoned": false, 00:27:05.085 "supported_io_types": { 00:27:05.085 "read": true, 00:27:05.085 "write": true, 00:27:05.085 "unmap": true, 00:27:05.085 "write_zeroes": true, 00:27:05.085 "flush": true, 00:27:05.085 "reset": true, 00:27:05.085 "compare": false, 00:27:05.085 "compare_and_write": false, 00:27:05.085 "abort": true, 00:27:05.085 "nvme_admin": false, 00:27:05.085 "nvme_io": false 00:27:05.085 }, 00:27:05.085 "memory_domains": [ 00:27:05.085 { 00:27:05.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.085 "dma_device_type": 2 00:27:05.085 } 00:27:05.085 ], 00:27:05.085 "driver_specific": {} 00:27:05.085 }, 00:27:05.085 { 00:27:05.085 "name": "Passthru0", 00:27:05.085 "aliases": [ 00:27:05.085 "7958d8f1-4fb7-5ff9-8c59-a553974c81be" 00:27:05.085 ], 00:27:05.085 "product_name": "passthru", 00:27:05.085 "block_size": 512, 00:27:05.085 "num_blocks": 16384, 00:27:05.085 "uuid": "7958d8f1-4fb7-5ff9-8c59-a553974c81be", 00:27:05.085 "assigned_rate_limits": { 00:27:05.085 "rw_ios_per_sec": 0, 00:27:05.085 "rw_mbytes_per_sec": 0, 00:27:05.085 "r_mbytes_per_sec": 0, 00:27:05.085 "w_mbytes_per_sec": 0 00:27:05.085 }, 00:27:05.085 "claimed": false, 00:27:05.085 "zoned": false, 00:27:05.085 "supported_io_types": { 00:27:05.085 "read": true, 00:27:05.085 "write": true, 00:27:05.085 "unmap": true, 00:27:05.085 
"write_zeroes": true, 00:27:05.085 "flush": true, 00:27:05.085 "reset": true, 00:27:05.085 "compare": false, 00:27:05.085 "compare_and_write": false, 00:27:05.085 "abort": true, 00:27:05.085 "nvme_admin": false, 00:27:05.085 "nvme_io": false 00:27:05.085 }, 00:27:05.085 "memory_domains": [ 00:27:05.085 { 00:27:05.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.085 "dma_device_type": 2 00:27:05.085 } 00:27:05.085 ], 00:27:05.085 "driver_specific": { 00:27:05.085 "passthru": { 00:27:05.085 "name": "Passthru0", 00:27:05.085 "base_bdev_name": "Malloc0" 00:27:05.085 } 00:27:05.085 } 00:27:05.085 } 00:27:05.085 ]' 00:27:05.085 16:04:09 -- rpc/rpc.sh@21 -- # jq length 00:27:05.085 16:04:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:05.085 16:04:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:05.085 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.085 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.085 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.085 16:04:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:05.085 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.085 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.085 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.085 16:04:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:05.085 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.085 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.085 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.085 16:04:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:27:05.085 16:04:09 -- rpc/rpc.sh@26 -- # jq length 00:27:05.085 ************************************ 00:27:05.085 END TEST rpc_integrity 00:27:05.085 ************************************ 00:27:05.085 16:04:09 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:05.085 00:27:05.085 real 0m0.190s 00:27:05.085 user 0m0.052s 00:27:05.085 sys 0m0.041s 00:27:05.085 16:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.085 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 16:04:09 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:27:05.344 16:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:05.344 16:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 ************************************ 00:27:05.344 START TEST rpc_plugins 00:27:05.344 ************************************ 00:27:05.344 16:04:09 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:27:05.344 16:04:09 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:27:05.344 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.344 16:04:09 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:27:05.344 16:04:09 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:27:05.344 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.344 16:04:09 -- rpc/rpc.sh@31 -- # bdevs='[ 00:27:05.344 { 00:27:05.344 "name": "Malloc1", 00:27:05.344 "aliases": [ 00:27:05.344 "da7b65e6-ca72-4526-9e29-2b4bd0c81281" 00:27:05.344 ], 00:27:05.344 "product_name": "Malloc disk", 00:27:05.344 
"block_size": 4096, 00:27:05.344 "num_blocks": 256, 00:27:05.344 "uuid": "da7b65e6-ca72-4526-9e29-2b4bd0c81281", 00:27:05.344 "assigned_rate_limits": { 00:27:05.344 "rw_ios_per_sec": 0, 00:27:05.344 "rw_mbytes_per_sec": 0, 00:27:05.344 "r_mbytes_per_sec": 0, 00:27:05.344 "w_mbytes_per_sec": 0 00:27:05.344 }, 00:27:05.344 "claimed": false, 00:27:05.344 "zoned": false, 00:27:05.344 "supported_io_types": { 00:27:05.344 "read": true, 00:27:05.344 "write": true, 00:27:05.344 "unmap": true, 00:27:05.344 "write_zeroes": true, 00:27:05.344 "flush": true, 00:27:05.344 "reset": true, 00:27:05.344 "compare": false, 00:27:05.344 "compare_and_write": false, 00:27:05.344 "abort": true, 00:27:05.344 "nvme_admin": false, 00:27:05.344 "nvme_io": false 00:27:05.344 }, 00:27:05.344 "memory_domains": [ 00:27:05.344 { 00:27:05.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.344 "dma_device_type": 2 00:27:05.344 } 00:27:05.344 ], 00:27:05.344 "driver_specific": {} 00:27:05.344 } 00:27:05.344 ]' 00:27:05.344 16:04:09 -- rpc/rpc.sh@32 -- # jq length 00:27:05.344 16:04:09 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:27:05.344 16:04:09 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:27:05.344 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.344 16:04:09 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:27:05.344 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.344 16:04:09 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:27:05.344 16:04:09 -- rpc/rpc.sh@36 -- # jq length 00:27:05.344 ************************************ 00:27:05.344 END TEST rpc_plugins 00:27:05.344 ************************************ 00:27:05.344 16:04:09 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:27:05.344 00:27:05.344 real 0m0.078s 00:27:05.344 user 0m0.022s 00:27:05.344 sys 0m0.019s 00:27:05.344 16:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 16:04:09 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:27:05.344 16:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:05.344 16:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 ************************************ 00:27:05.344 START TEST rpc_trace_cmd_test 00:27:05.344 ************************************ 00:27:05.344 16:04:09 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:27:05.344 16:04:09 -- rpc/rpc.sh@40 -- # local info 00:27:05.344 16:04:09 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:27:05.344 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.344 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.344 16:04:09 -- rpc/rpc.sh@42 -- # info='{ 00:27:05.344 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61235", 00:27:05.344 "tpoint_group_mask": "0x8", 00:27:05.344 "iscsi_conn": { 00:27:05.344 "mask": "0x2", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "scsi": { 00:27:05.344 "mask": "0x4", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "bdev": { 00:27:05.344 "mask": "0x8", 00:27:05.344 "tpoint_mask": 
"0xffffffffffffffff" 00:27:05.344 }, 00:27:05.344 "nvmf_rdma": { 00:27:05.344 "mask": "0x10", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "nvmf_tcp": { 00:27:05.344 "mask": "0x20", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "ftl": { 00:27:05.344 "mask": "0x40", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "blobfs": { 00:27:05.344 "mask": "0x80", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "dsa": { 00:27:05.344 "mask": "0x200", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "thread": { 00:27:05.344 "mask": "0x400", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "nvme_pcie": { 00:27:05.344 "mask": "0x800", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "iaa": { 00:27:05.344 "mask": "0x1000", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "nvme_tcp": { 00:27:05.344 "mask": "0x2000", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 }, 00:27:05.344 "bdev_nvme": { 00:27:05.344 "mask": "0x4000", 00:27:05.344 "tpoint_mask": "0x0" 00:27:05.344 } 00:27:05.344 }' 00:27:05.344 16:04:09 -- rpc/rpc.sh@43 -- # jq length 00:27:05.344 16:04:09 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:27:05.344 16:04:09 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:27:05.344 16:04:09 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:27:05.344 16:04:09 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:27:05.344 16:04:09 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:27:05.344 16:04:09 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:27:05.344 16:04:09 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:27:05.344 16:04:09 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:27:05.344 ************************************ 00:27:05.344 END TEST rpc_trace_cmd_test 00:27:05.344 ************************************ 00:27:05.344 16:04:09 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:27:05.344 00:27:05.344 real 0m0.067s 00:27:05.344 user 0m0.026s 00:27:05.344 sys 0m0.035s 00:27:05.344 16:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.344 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 16:04:09 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:27:05.603 16:04:09 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:27:05.603 16:04:09 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:27:05.603 16:04:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:05.603 16:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 ************************************ 00:27:05.603 START TEST rpc_daemon_integrity 00:27:05.603 ************************************ 00:27:05.603 16:04:09 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:27:05.603 16:04:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.603 16:04:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:05.603 16:04:09 -- rpc/rpc.sh@13 -- # jq length 00:27:05.603 16:04:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:05.603 16:04:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.603 16:04:09 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:27:05.603 16:04:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.603 16:04:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:05.603 { 00:27:05.603 "name": "Malloc2", 00:27:05.603 "aliases": [ 00:27:05.603 "73e41b5b-f2e1-4c6a-917a-7c8dbc199ca6" 00:27:05.603 ], 00:27:05.603 "product_name": "Malloc disk", 00:27:05.603 "block_size": 512, 00:27:05.603 "num_blocks": 16384, 00:27:05.603 "uuid": "73e41b5b-f2e1-4c6a-917a-7c8dbc199ca6", 00:27:05.603 "assigned_rate_limits": { 00:27:05.603 "rw_ios_per_sec": 0, 00:27:05.603 "rw_mbytes_per_sec": 0, 00:27:05.603 "r_mbytes_per_sec": 0, 00:27:05.603 "w_mbytes_per_sec": 0 00:27:05.603 }, 00:27:05.603 "claimed": false, 00:27:05.603 "zoned": false, 00:27:05.603 "supported_io_types": { 00:27:05.603 "read": true, 00:27:05.603 "write": true, 00:27:05.603 "unmap": true, 00:27:05.603 "write_zeroes": true, 00:27:05.603 "flush": true, 00:27:05.603 "reset": true, 00:27:05.603 "compare": false, 00:27:05.603 "compare_and_write": false, 00:27:05.603 "abort": true, 00:27:05.603 "nvme_admin": false, 00:27:05.603 "nvme_io": false 00:27:05.603 }, 00:27:05.603 "memory_domains": [ 00:27:05.603 { 00:27:05.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.603 "dma_device_type": 2 00:27:05.603 } 00:27:05.603 ], 00:27:05.603 "driver_specific": {} 00:27:05.603 } 00:27:05.603 ]' 00:27:05.603 16:04:09 -- rpc/rpc.sh@17 -- # jq length 00:27:05.603 16:04:09 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:05.603 16:04:09 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 [2024-07-22 16:04:09.729904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:27:05.603 [2024-07-22 16:04:09.730011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.603 [2024-07-22 16:04:09.730050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:27:05.603 [2024-07-22 16:04:09.730070] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.603 [2024-07-22 16:04:09.733206] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.603 [2024-07-22 16:04:09.733262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:05.603 Passthru0 00:27:05.603 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.603 16:04:09 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.603 16:04:09 -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:05.603 { 00:27:05.603 "name": "Malloc2", 00:27:05.603 "aliases": [ 00:27:05.603 "73e41b5b-f2e1-4c6a-917a-7c8dbc199ca6" 00:27:05.603 ], 00:27:05.603 "product_name": "Malloc disk", 00:27:05.603 "block_size": 512, 00:27:05.603 "num_blocks": 16384, 00:27:05.603 "uuid": "73e41b5b-f2e1-4c6a-917a-7c8dbc199ca6", 00:27:05.603 "assigned_rate_limits": { 00:27:05.603 "rw_ios_per_sec": 0, 00:27:05.603 "rw_mbytes_per_sec": 0, 00:27:05.603 "r_mbytes_per_sec": 0, 00:27:05.603 
"w_mbytes_per_sec": 0 00:27:05.603 }, 00:27:05.603 "claimed": true, 00:27:05.603 "claim_type": "exclusive_write", 00:27:05.603 "zoned": false, 00:27:05.603 "supported_io_types": { 00:27:05.603 "read": true, 00:27:05.603 "write": true, 00:27:05.603 "unmap": true, 00:27:05.603 "write_zeroes": true, 00:27:05.603 "flush": true, 00:27:05.603 "reset": true, 00:27:05.603 "compare": false, 00:27:05.603 "compare_and_write": false, 00:27:05.603 "abort": true, 00:27:05.603 "nvme_admin": false, 00:27:05.603 "nvme_io": false 00:27:05.603 }, 00:27:05.603 "memory_domains": [ 00:27:05.603 { 00:27:05.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.603 "dma_device_type": 2 00:27:05.603 } 00:27:05.603 ], 00:27:05.603 "driver_specific": {} 00:27:05.603 }, 00:27:05.603 { 00:27:05.603 "name": "Passthru0", 00:27:05.603 "aliases": [ 00:27:05.603 "051e0b35-70fc-57fe-b0be-17317acf4560" 00:27:05.603 ], 00:27:05.603 "product_name": "passthru", 00:27:05.603 "block_size": 512, 00:27:05.603 "num_blocks": 16384, 00:27:05.603 "uuid": "051e0b35-70fc-57fe-b0be-17317acf4560", 00:27:05.603 "assigned_rate_limits": { 00:27:05.603 "rw_ios_per_sec": 0, 00:27:05.603 "rw_mbytes_per_sec": 0, 00:27:05.603 "r_mbytes_per_sec": 0, 00:27:05.603 "w_mbytes_per_sec": 0 00:27:05.603 }, 00:27:05.603 "claimed": false, 00:27:05.603 "zoned": false, 00:27:05.603 "supported_io_types": { 00:27:05.603 "read": true, 00:27:05.603 "write": true, 00:27:05.603 "unmap": true, 00:27:05.603 "write_zeroes": true, 00:27:05.603 "flush": true, 00:27:05.603 "reset": true, 00:27:05.603 "compare": false, 00:27:05.603 "compare_and_write": false, 00:27:05.603 "abort": true, 00:27:05.603 "nvme_admin": false, 00:27:05.603 "nvme_io": false 00:27:05.603 }, 00:27:05.603 "memory_domains": [ 00:27:05.603 { 00:27:05.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.603 "dma_device_type": 2 00:27:05.603 } 00:27:05.603 ], 00:27:05.603 "driver_specific": { 00:27:05.603 "passthru": { 00:27:05.603 "name": "Passthru0", 00:27:05.603 "base_bdev_name": "Malloc2" 00:27:05.603 } 00:27:05.603 } 00:27:05.603 } 00:27:05.603 ]' 00:27:05.603 16:04:09 -- rpc/rpc.sh@21 -- # jq length 00:27:05.603 16:04:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:05.603 16:04:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.603 16:04:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.603 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.603 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.603 16:04:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:05.603 16:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:05.604 16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.604 16:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:05.604 16:04:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:27:05.604 16:04:09 -- rpc/rpc.sh@26 -- # jq length 00:27:05.604 ************************************ 00:27:05.604 END TEST rpc_daemon_integrity 00:27:05.604 ************************************ 00:27:05.604 16:04:09 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:05.604 00:27:05.604 real 0m0.179s 00:27:05.604 user 0m0.043s 00:27:05.604 sys 0m0.043s 00:27:05.604 16:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.604 
16:04:09 -- common/autotest_common.sh@10 -- # set +x 00:27:05.604 16:04:09 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:05.604 16:04:09 -- rpc/rpc.sh@84 -- # killprocess 61235 00:27:05.604 16:04:09 -- common/autotest_common.sh@926 -- # '[' -z 61235 ']' 00:27:05.604 16:04:09 -- common/autotest_common.sh@930 -- # kill -0 61235 00:27:05.604 16:04:09 -- common/autotest_common.sh@931 -- # uname 00:27:05.862 16:04:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:05.862 16:04:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61235 00:27:05.862 killing process with pid 61235 00:27:05.862 16:04:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:05.862 16:04:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:05.862 16:04:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61235' 00:27:05.862 16:04:09 -- common/autotest_common.sh@945 -- # kill 61235 00:27:05.862 16:04:09 -- common/autotest_common.sh@950 -- # wait 61235 00:27:08.415 00:27:08.415 real 0m5.074s 00:27:08.415 user 0m5.175s 00:27:08.415 sys 0m1.080s 00:27:08.415 16:04:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.415 ************************************ 00:27:08.415 END TEST rpc 00:27:08.415 ************************************ 00:27:08.415 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:27:08.415 16:04:12 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:27:08.415 16:04:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:08.415 16:04:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.415 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:27:08.415 ************************************ 00:27:08.415 START TEST rpc_client 00:27:08.415 ************************************ 00:27:08.415 16:04:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:27:08.415 * Looking for test storage... 
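The rpc_integrity, rpc_plugins and rpc_daemon_integrity tests above all drive the same create/inspect/delete cycle through scripts/rpc.py. A minimal by-hand sketch of that cycle against an already-running spdk_tgt follows; the socket path is the one used in this run, and jq is used only for the same length checks the tests perform.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # 8 MiB malloc bdev with 512-byte blocks, i.e. the 16384-block "Malloc disk" in the dumps above
    malloc=$($rpc bdev_malloc_create 8 512)

    # layer a passthru bdev on top; the base bdev then reports "claimed": true
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0

    # two bdevs in total is what the '[' 2 == 2 ']' check asserts
    $rpc bdev_get_bdevs | jq length

    # tear down in reverse order; bdev_get_bdevs returns [] again afterwards
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"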
00:27:08.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:27:08.415 16:04:12 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:27:08.415 OK 00:27:08.415 16:04:12 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:27:08.415 00:27:08.415 real 0m0.146s 00:27:08.415 user 0m0.061s 00:27:08.415 sys 0m0.095s 00:27:08.415 16:04:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:08.415 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:27:08.415 ************************************ 00:27:08.415 END TEST rpc_client 00:27:08.415 ************************************ 00:27:08.415 16:04:12 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:27:08.415 16:04:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:08.415 16:04:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:08.415 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:27:08.415 ************************************ 00:27:08.415 START TEST json_config 00:27:08.415 ************************************ 00:27:08.415 16:04:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:27:08.415 16:04:12 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:08.415 16:04:12 -- nvmf/common.sh@7 -- # uname -s 00:27:08.415 16:04:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:08.415 16:04:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:08.415 16:04:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:08.415 16:04:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:08.415 16:04:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:08.415 16:04:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:08.415 16:04:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:08.415 16:04:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:08.415 16:04:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:08.415 16:04:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:08.415 16:04:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71b8af37-fdb9-4e3a-a376-0d434c729595 00:27:08.415 16:04:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=71b8af37-fdb9-4e3a-a376-0d434c729595 00:27:08.415 16:04:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:08.415 16:04:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:08.415 16:04:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:08.415 16:04:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:08.415 16:04:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:08.415 16:04:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.415 16:04:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.415 16:04:12 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.415 16:04:12 -- paths/export.sh@3 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.415 16:04:12 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.415 16:04:12 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.415 16:04:12 -- paths/export.sh@6 -- # export PATH 00:27:08.415 16:04:12 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:08.415 16:04:12 -- nvmf/common.sh@46 -- # : 0 00:27:08.415 16:04:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:08.415 16:04:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:08.415 16:04:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:08.415 16:04:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:08.415 16:04:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:08.415 16:04:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:08.415 16:04:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:08.415 16:04:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:08.415 16:04:12 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:27:08.415 16:04:12 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:27:08.415 16:04:12 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:27:08.415 16:04:12 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:27:08.415 16:04:12 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:27:08.415 16:04:12 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:27:08.415 16:04:12 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' 
['initiator']='/var/tmp/spdk_initiator.sock') 00:27:08.415 16:04:12 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:27:08.415 16:04:12 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:27:08.415 16:04:12 -- json_config/json_config.sh@32 -- # declare -A app_params 00:27:08.415 16:04:12 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:27:08.415 16:04:12 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:27:08.415 16:04:12 -- json_config/json_config.sh@43 -- # last_event_id=0 00:27:08.674 16:04:12 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:27:08.674 INFO: JSON configuration test init 00:27:08.674 16:04:12 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:27:08.674 16:04:12 -- json_config/json_config.sh@420 -- # json_config_test_init 00:27:08.674 16:04:12 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:27:08.674 16:04:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:08.674 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:27:08.674 16:04:12 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:27:08.674 16:04:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:08.674 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:27:08.674 16:04:12 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:27:08.674 16:04:12 -- json_config/json_config.sh@98 -- # local app=target 00:27:08.674 16:04:12 -- json_config/json_config.sh@99 -- # shift 00:27:08.674 16:04:12 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:27:08.674 16:04:12 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:27:08.674 16:04:12 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:27:08.674 16:04:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:27:08.674 16:04:12 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:27:08.674 16:04:12 -- json_config/json_config.sh@111 -- # app_pid[$app]=61497 00:27:08.674 Waiting for target to run... 00:27:08.674 16:04:12 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:27:08.674 16:04:12 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:27:08.674 16:04:12 -- json_config/json_config.sh@114 -- # waitforlisten 61497 /var/tmp/spdk_tgt.sock 00:27:08.674 16:04:12 -- common/autotest_common.sh@819 -- # '[' -z 61497 ']' 00:27:08.674 16:04:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:27:08.674 16:04:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:08.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:27:08.674 16:04:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:27:08.674 16:04:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:08.674 16:04:12 -- common/autotest_common.sh@10 -- # set +x 00:27:08.674 [2024-07-22 16:04:12.791542] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
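json_config_test_start_app launches the target with --wait-for-rpc and waitforlisten then blocks until the UNIX-domain RPC socket answers. A rough stand-alone equivalent is sketched below; the polling loop is an illustrative substitute for waitforlisten, and rpc_get_methods is used only as a cheap probe RPC.

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock

    # same invocation as json_config.sh@110: one core, 1024 MiB of memory, private RPC socket
    "$bin" -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
    tgt_pid=$!

    # poll until the target answers on the socket (waitforlisten does this with a timeout)
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "spdk_tgt ($tgt_pid) is listening on $sock"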
00:27:08.674 [2024-07-22 16:04:12.791707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61497 ] 00:27:09.238 [2024-07-22 16:04:13.354360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.496 [2024-07-22 16:04:13.592411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:09.496 [2024-07-22 16:04:13.592697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.496 16:04:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:09.496 16:04:13 -- common/autotest_common.sh@852 -- # return 0 00:27:09.496 00:27:09.496 16:04:13 -- json_config/json_config.sh@115 -- # echo '' 00:27:09.496 16:04:13 -- json_config/json_config.sh@322 -- # create_accel_config 00:27:09.496 16:04:13 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:27:09.496 16:04:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:09.496 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:27:09.496 16:04:13 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:27:09.496 16:04:13 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:27:09.496 16:04:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:09.496 16:04:13 -- common/autotest_common.sh@10 -- # set +x 00:27:09.754 16:04:13 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:27:09.754 16:04:13 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:27:09.754 16:04:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:27:10.699 16:04:14 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:27:10.699 16:04:14 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:27:10.699 16:04:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:10.699 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:27:10.699 16:04:14 -- json_config/json_config.sh@48 -- # local ret=0 00:27:10.699 16:04:14 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:27:10.699 16:04:14 -- json_config/json_config.sh@49 -- # local enabled_types 00:27:10.699 16:04:14 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:27:10.699 16:04:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:27:10.699 16:04:14 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:27:10.958 16:04:14 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:27:10.958 16:04:14 -- json_config/json_config.sh@51 -- # local get_types 00:27:10.958 16:04:14 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:27:10.958 16:04:14 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:27:10.958 16:04:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:10.958 16:04:14 -- common/autotest_common.sh@10 -- # set +x 00:27:10.958 16:04:15 -- json_config/json_config.sh@58 -- # return 0 00:27:10.958 16:04:15 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:27:10.958 16:04:15 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:27:10.958 16:04:15 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:27:10.958 16:04:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:10.958 16:04:15 -- common/autotest_common.sh@10 -- # set +x 00:27:10.958 16:04:15 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:27:10.958 16:04:15 -- json_config/json_config.sh@160 -- # local expected_notifications 00:27:10.958 16:04:15 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:27:10.958 16:04:15 -- json_config/json_config.sh@164 -- # get_notifications 00:27:10.958 16:04:15 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:27:10.958 16:04:15 -- json_config/json_config.sh@64 -- # IFS=: 00:27:10.958 16:04:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:10.958 16:04:15 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:27:10.958 16:04:15 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:27:10.958 16:04:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:27:11.217 16:04:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:27:11.217 16:04:15 -- json_config/json_config.sh@64 -- # IFS=: 00:27:11.217 16:04:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:11.217 16:04:15 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:27:11.217 16:04:15 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:27:11.217 16:04:15 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:27:11.217 16:04:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:27:11.217 Nvme0n1p0 Nvme0n1p1 00:27:11.217 16:04:15 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:27:11.217 16:04:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:27:11.475 [2024-07-22 16:04:15.692420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:11.475 [2024-07-22 16:04:15.692543] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:11.475 00:27:11.475 16:04:15 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:27:11.475 16:04:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:27:11.733 Malloc3 00:27:11.733 16:04:15 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:27:11.733 16:04:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:27:11.991 [2024-07-22 16:04:16.158289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:11.991 [2024-07-22 16:04:16.158419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.991 [2024-07-22 16:04:16.158458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:27:11.991 [2024-07-22 16:04:16.158488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:27:11.991 [2024-07-22 16:04:16.161477] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.991 [2024-07-22 16:04:16.161551] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:27:11.991 PTBdevFromMalloc3 00:27:11.991 16:04:16 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:27:11.991 16:04:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:27:12.250 Null0 00:27:12.250 16:04:16 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:27:12.250 16:04:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:27:12.509 Malloc0 00:27:12.509 16:04:16 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:27:12.509 16:04:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:27:12.767 Malloc1 00:27:12.767 16:04:16 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:27:12.767 16:04:16 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:27:13.025 102400+0 records in 00:27:13.025 102400+0 records out 00:27:13.025 104857600 bytes (105 MB, 100 MiB) copied, 0.260012 s, 403 MB/s 00:27:13.025 16:04:17 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:27:13.025 16:04:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:27:13.284 aio_disk 00:27:13.284 16:04:17 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:27:13.284 16:04:17 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:27:13.284 16:04:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:27:13.543 4f13252d-0e87-4ba7-a0b6-452e84cd8e7e 00:27:13.543 16:04:17 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:27:13.543 16:04:17 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:27:13.543 16:04:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:27:13.801 16:04:17 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:27:13.801 16:04:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:27:14.060 16:04:18 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:27:14.060 16:04:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:27:14.340 16:04:18 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:27:14.340 16:04:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:27:14.597 16:04:18 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:27:14.597 16:04:18 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:27:14.597 16:04:18 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:70630475-7015-4ebd-bf3c-bc4ae3becb87 bdev_register:9829024b-29ff-465c-bb09-b459cf9e34a4 bdev_register:73081d46-471b-4398-88c6-bac233b1defa bdev_register:7a480d9d-e0aa-47cb-9bdb-48224500052f 00:27:14.597 16:04:18 -- json_config/json_config.sh@70 -- # local events_to_check 00:27:14.597 16:04:18 -- json_config/json_config.sh@71 -- # local recorded_events 00:27:14.597 16:04:18 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:27:14.597 16:04:18 -- json_config/json_config.sh@74 -- # sort 00:27:14.597 16:04:18 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:70630475-7015-4ebd-bf3c-bc4ae3becb87 bdev_register:9829024b-29ff-465c-bb09-b459cf9e34a4 bdev_register:73081d46-471b-4398-88c6-bac233b1defa bdev_register:7a480d9d-e0aa-47cb-9bdb-48224500052f 00:27:14.597 16:04:18 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:27:14.597 16:04:18 -- json_config/json_config.sh@75 -- # sort 00:27:14.597 16:04:18 -- json_config/json_config.sh@75 -- # get_notifications 00:27:14.597 16:04:18 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:27:14.597 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.597 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.597 16:04:18 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:27:14.597 16:04:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:27:14.597 16:04:18 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:70630475-7015-4ebd-bf3c-bc4ae3becb87 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:9829024b-29ff-465c-bb09-b459cf9e34a4 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:73081d46-471b-4398-88c6-bac233b1defa 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@65 -- # echo bdev_register:7a480d9d-e0aa-47cb-9bdb-48224500052f 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # IFS=: 00:27:14.856 16:04:18 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:27:14.856 16:04:18 -- json_config/json_config.sh@77 
-- # [[ bdev_register:70630475-7015-4ebd-bf3c-bc4ae3becb87 bdev_register:73081d46-471b-4398-88c6-bac233b1defa bdev_register:7a480d9d-e0aa-47cb-9bdb-48224500052f bdev_register:9829024b-29ff-465c-bb09-b459cf9e34a4 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\0\6\3\0\4\7\5\-\7\0\1\5\-\4\e\b\d\-\b\f\3\c\-\b\c\4\a\e\3\b\e\c\b\8\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\3\0\8\1\d\4\6\-\4\7\1\b\-\4\3\9\8\-\8\8\c\6\-\b\a\c\2\3\3\b\1\d\e\f\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\a\4\8\0\d\9\d\-\e\0\a\a\-\4\7\c\b\-\9\b\d\b\-\4\8\2\2\4\5\0\0\0\5\2\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\8\2\9\0\2\4\b\-\2\9\f\f\-\4\6\5\c\-\b\b\0\9\-\b\4\5\9\c\f\9\e\3\4\a\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:27:14.856 16:04:18 -- json_config/json_config.sh@89 -- # cat 00:27:14.856 16:04:18 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:70630475-7015-4ebd-bf3c-bc4ae3becb87 bdev_register:73081d46-471b-4398-88c6-bac233b1defa bdev_register:7a480d9d-e0aa-47cb-9bdb-48224500052f bdev_register:9829024b-29ff-465c-bb09-b459cf9e34a4 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:27:14.856 Expected events matched: 00:27:14.856 bdev_register:70630475-7015-4ebd-bf3c-bc4ae3becb87 00:27:14.856 bdev_register:73081d46-471b-4398-88c6-bac233b1defa 00:27:14.856 bdev_register:7a480d9d-e0aa-47cb-9bdb-48224500052f 00:27:14.856 bdev_register:9829024b-29ff-465c-bb09-b459cf9e34a4 00:27:14.856 bdev_register:Malloc0 00:27:14.856 bdev_register:Malloc0p0 00:27:14.856 bdev_register:Malloc0p1 00:27:14.856 bdev_register:Malloc0p2 00:27:14.856 bdev_register:Malloc1 00:27:14.856 bdev_register:Malloc3 00:27:14.856 bdev_register:Null0 00:27:14.856 bdev_register:Nvme0n1 00:27:14.857 bdev_register:Nvme0n1p0 00:27:14.857 bdev_register:Nvme0n1p1 00:27:14.857 bdev_register:PTBdevFromMalloc3 00:27:14.857 bdev_register:aio_disk 00:27:14.857 16:04:18 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:27:14.857 16:04:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:14.857 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:27:14.857 16:04:18 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:27:14.857 16:04:18 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:27:14.857 16:04:18 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:27:14.857 16:04:18 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:27:14.857 16:04:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:14.857 16:04:18 -- common/autotest_common.sh@10 -- # set +x 00:27:14.857 
16:04:19 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:27:14.857 16:04:19 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:27:14.857 16:04:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:27:15.115 MallocBdevForConfigChangeCheck 00:27:15.115 16:04:19 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:27:15.115 16:04:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:15.115 16:04:19 -- common/autotest_common.sh@10 -- # set +x 00:27:15.115 16:04:19 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:27:15.115 16:04:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:27:15.682 INFO: shutting down applications... 00:27:15.682 16:04:19 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:27:15.682 16:04:19 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:27:15.682 16:04:19 -- json_config/json_config.sh@431 -- # json_config_clear target 00:27:15.682 16:04:19 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:27:15.682 16:04:19 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:27:15.682 [2024-07-22 16:04:19.878894] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:27:15.940 Calling clear_vhost_scsi_subsystem 00:27:15.940 Calling clear_iscsi_subsystem 00:27:15.940 Calling clear_vhost_blk_subsystem 00:27:15.940 Calling clear_ublk_subsystem 00:27:15.940 Calling clear_nbd_subsystem 00:27:15.940 Calling clear_nvmf_subsystem 00:27:15.940 Calling clear_bdev_subsystem 00:27:15.940 Calling clear_accel_subsystem 00:27:15.940 Calling clear_iobuf_subsystem 00:27:15.940 Calling clear_sock_subsystem 00:27:15.940 Calling clear_vmd_subsystem 00:27:15.940 Calling clear_scheduler_subsystem 00:27:15.940 16:04:20 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:27:15.940 16:04:20 -- json_config/json_config.sh@396 -- # count=100 00:27:15.940 16:04:20 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:27:15.940 16:04:20 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:27:15.940 16:04:20 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:27:15.940 16:04:20 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:27:16.199 16:04:20 -- json_config/json_config.sh@398 -- # break 00:27:16.199 16:04:20 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:27:16.199 16:04:20 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:27:16.199 16:04:20 -- json_config/json_config.sh@120 -- # local app=target 00:27:16.199 16:04:20 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:27:16.199 16:04:20 -- json_config/json_config.sh@124 -- # [[ -n 61497 ]] 00:27:16.199 16:04:20 -- json_config/json_config.sh@127 -- # kill -SIGINT 61497 00:27:16.199 16:04:20 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:27:16.199 16:04:20 -- json_config/json_config.sh@129 -- # (( i < 30 )) 
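The repeated 'kill -0 61497' probes here are json_config_test_shutdown_app waiting for the target to exit after SIGINT. Condensed into a sketch, with the pid and retry count taken from this run:

    app_pid=61497                 # pid recorded when the target was launched
    kill -SIGINT "$app_pid"       # ask the target to shut down cleanly

    # poll for up to 30 * 0.5 s, as json_config.sh@129-134 does
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done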
00:27:16.199 16:04:20 -- json_config/json_config.sh@130 -- # kill -0 61497 00:27:16.199 16:04:20 -- json_config/json_config.sh@134 -- # sleep 0.5 00:27:16.779 16:04:20 -- json_config/json_config.sh@129 -- # (( i++ )) 00:27:16.779 16:04:20 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:27:16.779 16:04:20 -- json_config/json_config.sh@130 -- # kill -0 61497 00:27:16.779 16:04:20 -- json_config/json_config.sh@134 -- # sleep 0.5 00:27:17.346 16:04:21 -- json_config/json_config.sh@129 -- # (( i++ )) 00:27:17.346 16:04:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:27:17.346 16:04:21 -- json_config/json_config.sh@130 -- # kill -0 61497 00:27:17.346 16:04:21 -- json_config/json_config.sh@134 -- # sleep 0.5 00:27:17.913 16:04:21 -- json_config/json_config.sh@129 -- # (( i++ )) 00:27:17.913 16:04:21 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:27:17.913 16:04:21 -- json_config/json_config.sh@130 -- # kill -0 61497 00:27:17.913 16:04:21 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:27:17.913 16:04:21 -- json_config/json_config.sh@132 -- # break 00:27:17.913 16:04:21 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:27:17.913 SPDK target shutdown done 00:27:17.913 16:04:21 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:27:17.913 INFO: relaunching applications... 00:27:17.913 16:04:21 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:27:17.913 16:04:21 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:27:17.913 16:04:21 -- json_config/json_config.sh@98 -- # local app=target 00:27:17.913 16:04:21 -- json_config/json_config.sh@99 -- # shift 00:27:17.913 16:04:21 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:27:17.913 16:04:21 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:27:17.913 16:04:21 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:27:17.913 16:04:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:27:17.913 16:04:21 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:27:17.913 16:04:21 -- json_config/json_config.sh@111 -- # app_pid[$app]=61750 00:27:17.913 16:04:21 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:27:17.913 Waiting for target to run... 00:27:17.913 16:04:21 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:27:17.913 16:04:21 -- json_config/json_config.sh@114 -- # waitforlisten 61750 /var/tmp/spdk_tgt.sock 00:27:17.913 16:04:21 -- common/autotest_common.sh@819 -- # '[' -z 61750 ']' 00:27:17.913 16:04:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:27:17.913 16:04:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:17.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:27:17.913 16:04:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:27:17.913 16:04:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:17.913 16:04:21 -- common/autotest_common.sh@10 -- # set +x 00:27:17.913 [2024-07-22 16:04:22.077713] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
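The relaunch replays the configuration that was captured with save_config before the shutdown. In isolation the round trip looks roughly like the sketch below; the file path is the one this job uses, and the target rebuilds every bdev and subsystem from that JSON before serving further RPCs.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # capture the running target's configuration as JSON
    $rpc save_config > "$cfg"

    # ... stop the old target, then start a fresh one that applies the file at boot ...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json "$cfg" &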
00:27:17.913 [2024-07-22 16:04:22.077974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61750 ] 00:27:18.480 [2024-07-22 16:04:22.676155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.739 [2024-07-22 16:04:22.947823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:18.739 [2024-07-22 16:04:22.948155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.674 [2024-07-22 16:04:23.654142] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:27:19.674 [2024-07-22 16:04:23.654249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:27:19.674 [2024-07-22 16:04:23.662078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:19.674 [2024-07-22 16:04:23.662131] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:27:19.674 [2024-07-22 16:04:23.670102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:19.674 [2024-07-22 16:04:23.670150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:27:19.674 [2024-07-22 16:04:23.670168] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:27:19.674 [2024-07-22 16:04:23.765726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:27:19.674 [2024-07-22 16:04:23.765819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.674 [2024-07-22 16:04:23.765847] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:27:19.674 [2024-07-22 16:04:23.765861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.674 [2024-07-22 16:04:23.766399] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.674 [2024-07-22 16:04:23.766437] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:27:20.243 16:04:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:20.243 16:04:24 -- common/autotest_common.sh@852 -- # return 0 00:27:20.243 16:04:24 -- json_config/json_config.sh@115 -- # echo '' 00:27:20.243 00:27:20.243 16:04:24 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:27:20.243 INFO: Checking if target configuration is the same... 00:27:20.243 16:04:24 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:27:20.243 16:04:24 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:27:20.243 16:04:24 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:27:20.243 16:04:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:27:20.243 + '[' 2 -ne 2 ']' 00:27:20.243 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:27:20.243 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:27:20.243 + rootdir=/home/vagrant/spdk_repo/spdk 00:27:20.243 +++ basename /dev/fd/62 00:27:20.243 ++ mktemp /tmp/62.XXX 00:27:20.243 + tmp_file_1=/tmp/62.bf5 00:27:20.243 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:27:20.243 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:27:20.243 + tmp_file_2=/tmp/spdk_tgt_config.json.N1k 00:27:20.243 + ret=0 00:27:20.243 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:27:20.810 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:27:20.810 + diff -u /tmp/62.bf5 /tmp/spdk_tgt_config.json.N1k 00:27:20.810 INFO: JSON config files are the same 00:27:20.810 + echo 'INFO: JSON config files are the same' 00:27:20.810 + rm /tmp/62.bf5 /tmp/spdk_tgt_config.json.N1k 00:27:20.810 + exit 0 00:27:20.810 16:04:24 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:27:20.810 INFO: changing configuration and checking if this can be detected... 00:27:20.810 16:04:24 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:27:20.810 16:04:24 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:27:20.810 16:04:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:27:21.069 16:04:25 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:27:21.069 16:04:25 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:27:21.069 16:04:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:27:21.069 + '[' 2 -ne 2 ']' 00:27:21.069 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:27:21.069 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:27:21.069 + rootdir=/home/vagrant/spdk_repo/spdk 00:27:21.069 +++ basename /dev/fd/62 00:27:21.069 ++ mktemp /tmp/62.XXX 00:27:21.069 + tmp_file_1=/tmp/62.V8O 00:27:21.069 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:27:21.069 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:27:21.069 + tmp_file_2=/tmp/spdk_tgt_config.json.pcS 00:27:21.069 + ret=0 00:27:21.069 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:27:21.636 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:27:21.636 + diff -u /tmp/62.V8O /tmp/spdk_tgt_config.json.pcS 00:27:21.636 + ret=1 00:27:21.636 + echo '=== Start of file: /tmp/62.V8O ===' 00:27:21.636 + cat /tmp/62.V8O 00:27:21.636 + echo '=== End of file: /tmp/62.V8O ===' 00:27:21.636 + echo '' 00:27:21.636 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pcS ===' 00:27:21.636 + cat /tmp/spdk_tgt_config.json.pcS 00:27:21.636 + echo '=== End of file: /tmp/spdk_tgt_config.json.pcS ===' 00:27:21.636 + echo '' 00:27:21.636 + rm /tmp/62.V8O /tmp/spdk_tgt_config.json.pcS 00:27:21.636 + exit 1 00:27:21.636 INFO: configuration change detected. 00:27:21.636 16:04:25 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
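(Note: the diff above is how the test decides whether the running target's configuration changed: the live config from `tgt_rpc save_config` and the on-disk spdk_tgt_config.json are both normalized with config_filter.py -method sort and then compared with diff. A reduced sketch of the same check, assuming the target listens on /var/tmp/spdk_tgt.sock and that config_filter.py reads JSON on stdin as it does in this trace, might be:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    live=$(mktemp /tmp/live_config.XXX)
    saved=$(mktemp /tmp/saved_config.XXX)
    # dump the running target's configuration and sort it for a stable comparison
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
    # sort the previously saved configuration the same way
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
    if diff -u "$saved" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"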
00:27:21.636 16:04:25 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:27:21.636 16:04:25 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:27:21.636 16:04:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:21.636 16:04:25 -- common/autotest_common.sh@10 -- # set +x 00:27:21.636 16:04:25 -- json_config/json_config.sh@360 -- # local ret=0 00:27:21.636 16:04:25 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:27:21.636 16:04:25 -- json_config/json_config.sh@370 -- # [[ -n 61750 ]] 00:27:21.636 16:04:25 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:27:21.636 16:04:25 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:27:21.636 16:04:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:21.636 16:04:25 -- common/autotest_common.sh@10 -- # set +x 00:27:21.636 16:04:25 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:27:21.636 16:04:25 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:27:21.636 16:04:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:27:21.894 16:04:25 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:27:21.894 16:04:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:27:21.894 16:04:26 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:27:21.894 16:04:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:27:22.153 16:04:26 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:27:22.153 16:04:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:27:22.411 16:04:26 -- json_config/json_config.sh@246 -- # uname -s 00:27:22.411 16:04:26 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:27:22.411 16:04:26 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:27:22.411 16:04:26 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:27:22.411 16:04:26 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:27:22.411 16:04:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:22.411 16:04:26 -- common/autotest_common.sh@10 -- # set +x 00:27:22.411 16:04:26 -- json_config/json_config.sh@376 -- # killprocess 61750 00:27:22.411 16:04:26 -- common/autotest_common.sh@926 -- # '[' -z 61750 ']' 00:27:22.411 16:04:26 -- common/autotest_common.sh@930 -- # kill -0 61750 00:27:22.411 16:04:26 -- common/autotest_common.sh@931 -- # uname 00:27:22.411 16:04:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:22.411 16:04:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 61750 00:27:22.411 killing process with pid 61750 00:27:22.411 16:04:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:22.411 16:04:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:22.411 16:04:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 61750' 00:27:22.411 16:04:26 -- common/autotest_common.sh@945 -- # kill 61750 00:27:22.411 16:04:26 -- common/autotest_common.sh@950 -- # wait 61750 00:27:23.802 16:04:27 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:27:23.802 16:04:27 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:27:23.802 16:04:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:23.802 16:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:23.802 INFO: Success 00:27:23.802 16:04:27 -- json_config/json_config.sh@381 -- # return 0 00:27:23.802 16:04:27 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:27:23.802 00:27:23.802 real 0m15.275s 00:27:23.802 user 0m20.763s 00:27:23.802 sys 0m3.071s 00:27:23.802 ************************************ 00:27:23.802 END TEST json_config 00:27:23.802 ************************************ 00:27:23.802 16:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.802 16:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:23.802 16:04:27 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:27:23.802 16:04:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:23.802 16:04:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:23.802 16:04:27 -- common/autotest_common.sh@10 -- # set +x 00:27:23.802 ************************************ 00:27:23.802 START TEST json_config_extra_key 00:27:23.802 ************************************ 00:27:23.802 16:04:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:27:23.802 16:04:27 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:23.802 16:04:27 -- nvmf/common.sh@7 -- # uname -s 00:27:23.802 16:04:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.802 16:04:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.802 16:04:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.802 16:04:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.802 16:04:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.802 16:04:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.802 16:04:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.802 16:04:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.802 16:04:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.802 16:04:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.802 16:04:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71b8af37-fdb9-4e3a-a376-0d434c729595 00:27:23.802 16:04:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=71b8af37-fdb9-4e3a-a376-0d434c729595 00:27:23.802 16:04:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.802 16:04:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.802 16:04:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:23.802 16:04:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:23.802 16:04:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.802 16:04:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.802 16:04:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.802 16:04:28 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:23.802 16:04:28 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:23.802 16:04:28 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:23.802 16:04:28 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:23.802 16:04:28 -- paths/export.sh@6 -- # export PATH 00:27:23.802 16:04:28 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:23.802 16:04:28 -- nvmf/common.sh@46 -- # : 0 00:27:23.802 16:04:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:23.802 16:04:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:23.802 16:04:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:23.802 16:04:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.802 16:04:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.802 16:04:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:23.802 16:04:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:23.802 16:04:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:27:23.802 16:04:28 -- 
json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:27:23.802 INFO: launching applications... 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@25 -- # shift 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:27:23.802 Waiting for target to run... 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=61937 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 61937 /var/tmp/spdk_tgt.sock 00:27:23.802 16:04:28 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:27:23.802 16:04:28 -- common/autotest_common.sh@819 -- # '[' -z 61937 ']' 00:27:23.802 16:04:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:27:23.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:27:23.802 16:04:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:23.802 16:04:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:27:23.802 16:04:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:23.802 16:04:28 -- common/autotest_common.sh@10 -- # set +x 00:27:24.061 [2024-07-22 16:04:28.086478] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:24.061 [2024-07-22 16:04:28.086913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61937 ] 00:27:24.628 [2024-07-22 16:04:28.642976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.886 [2024-07-22 16:04:28.903362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:24.886 [2024-07-22 16:04:28.903800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.821 00:27:25.821 INFO: shutting down applications... 
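(Note: before this point the extra_key test started spdk_tgt with `-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json .../extra_key.json` and waited for the RPC socket via waitforlisten; the shutdown loop that follows mirrors the earlier json_config teardown. A stripped-down start-and-wait step, assuming a config path in $cfg and using the spdk_get_version RPC as the liveness probe, could be:)

    cfg=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json   # example config from this run
    sock=/var/tmp/spdk_tgt.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --json "$cfg" &
    app_pid=$!
    # poll the RPC socket until the target answers (simplified stand-in for waitforlisten)
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
    echo "Waiting for target to run... done (pid $app_pid)"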
00:27:25.821 16:04:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:25.821 16:04:29 -- common/autotest_common.sh@852 -- # return 0 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 61937 ]] 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 61937 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61937 00:27:25.821 16:04:29 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:27:26.079 16:04:30 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:27:26.079 16:04:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:27:26.079 16:04:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61937 00:27:26.079 16:04:30 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:27:26.697 16:04:30 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:27:26.697 16:04:30 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:27:26.697 16:04:30 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61937 00:27:26.697 16:04:30 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:27:27.262 16:04:31 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:27:27.262 16:04:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:27:27.262 16:04:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61937 00:27:27.262 16:04:31 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:27:27.520 16:04:31 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:27:27.520 16:04:31 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:27:27.520 16:04:31 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61937 00:27:27.520 16:04:31 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:27:28.086 16:04:32 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:27:28.086 16:04:32 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:27:28.086 16:04:32 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61937 00:27:28.086 16:04:32 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61937 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@52 -- # break 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:27:28.678 SPDK target shutdown done 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:27:28.678 Success 00:27:28.678 16:04:32 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:27:28.678 ************************************ 00:27:28.679 END TEST 
json_config_extra_key 00:27:28.679 ************************************ 00:27:28.679 00:27:28.679 real 0m4.823s 00:27:28.679 user 0m4.441s 00:27:28.679 sys 0m0.877s 00:27:28.679 16:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.679 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:28.679 16:04:32 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:28.679 16:04:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:28.679 16:04:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:28.679 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:28.679 ************************************ 00:27:28.679 START TEST alias_rpc 00:27:28.679 ************************************ 00:27:28.679 16:04:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:27:28.679 * Looking for test storage... 00:27:28.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:27:28.679 16:04:32 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:27:28.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.679 16:04:32 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62042 00:27:28.679 16:04:32 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62042 00:27:28.679 16:04:32 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:28.679 16:04:32 -- common/autotest_common.sh@819 -- # '[' -z 62042 ']' 00:27:28.679 16:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.679 16:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:28.679 16:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.679 16:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:28.679 16:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:28.944 [2024-07-22 16:04:32.970053] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:28.944 [2024-07-22 16:04:32.970258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62042 ] 00:27:28.944 [2024-07-22 16:04:33.144123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.203 [2024-07-22 16:04:33.422488] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:29.203 [2024-07-22 16:04:33.422807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.577 16:04:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.577 16:04:34 -- common/autotest_common.sh@852 -- # return 0 00:27:30.577 16:04:34 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:27:30.835 16:04:34 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62042 00:27:30.835 16:04:34 -- common/autotest_common.sh@926 -- # '[' -z 62042 ']' 00:27:30.835 16:04:34 -- common/autotest_common.sh@930 -- # kill -0 62042 00:27:30.835 16:04:34 -- common/autotest_common.sh@931 -- # uname 00:27:30.835 16:04:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:30.835 16:04:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62042 00:27:30.835 killing process with pid 62042 00:27:30.835 16:04:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:30.835 16:04:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:30.835 16:04:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62042' 00:27:30.835 16:04:34 -- common/autotest_common.sh@945 -- # kill 62042 00:27:30.835 16:04:34 -- common/autotest_common.sh@950 -- # wait 62042 00:27:33.367 ************************************ 00:27:33.367 END TEST alias_rpc 00:27:33.367 ************************************ 00:27:33.367 00:27:33.368 real 0m4.600s 00:27:33.368 user 0m4.666s 00:27:33.368 sys 0m0.808s 00:27:33.368 16:04:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.368 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:27:33.368 16:04:37 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:27:33.368 16:04:37 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:33.368 16:04:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:33.368 16:04:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.368 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:27:33.368 ************************************ 00:27:33.368 START TEST spdkcli_tcp 00:27:33.368 ************************************ 00:27:33.368 16:04:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:27:33.368 * Looking for test storage... 
00:27:33.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:27:33.368 16:04:37 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:27:33.368 16:04:37 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:27:33.368 16:04:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:33.368 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62146 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:27:33.368 16:04:37 -- spdkcli/tcp.sh@27 -- # waitforlisten 62146 00:27:33.368 16:04:37 -- common/autotest_common.sh@819 -- # '[' -z 62146 ']' 00:27:33.368 16:04:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.368 16:04:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:33.368 16:04:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.368 16:04:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:33.368 16:04:37 -- common/autotest_common.sh@10 -- # set +x 00:27:33.368 [2024-07-22 16:04:37.615564] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:33.368 [2024-07-22 16:04:37.615706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62146 ] 00:27:33.627 [2024-07-22 16:04:37.789067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:33.885 [2024-07-22 16:04:38.074903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:33.885 [2024-07-22 16:04:38.075312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.885 [2024-07-22 16:04:38.075332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.261 16:04:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:35.261 16:04:39 -- common/autotest_common.sh@852 -- # return 0 00:27:35.261 16:04:39 -- spdkcli/tcp.sh@31 -- # socat_pid=62172 00:27:35.261 16:04:39 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:27:35.261 16:04:39 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:27:35.261 [ 00:27:35.261 "spdk_get_version", 00:27:35.261 "rpc_get_methods", 00:27:35.261 "trace_get_info", 00:27:35.261 "trace_get_tpoint_group_mask", 00:27:35.261 "trace_disable_tpoint_group", 00:27:35.261 "trace_enable_tpoint_group", 00:27:35.261 "trace_clear_tpoint_mask", 00:27:35.261 "trace_set_tpoint_mask", 00:27:35.261 "framework_get_pci_devices", 00:27:35.261 "framework_get_config", 00:27:35.261 "framework_get_subsystems", 00:27:35.261 "iobuf_get_stats", 00:27:35.261 "iobuf_set_options", 00:27:35.261 "sock_set_default_impl", 00:27:35.261 "sock_impl_set_options", 00:27:35.261 "sock_impl_get_options", 00:27:35.261 "vmd_rescan", 00:27:35.261 "vmd_remove_device", 00:27:35.261 "vmd_enable", 00:27:35.261 "accel_get_stats", 00:27:35.261 "accel_set_options", 00:27:35.261 "accel_set_driver", 00:27:35.261 "accel_crypto_key_destroy", 00:27:35.261 "accel_crypto_keys_get", 00:27:35.261 "accel_crypto_key_create", 00:27:35.261 "accel_assign_opc", 00:27:35.261 "accel_get_module_info", 00:27:35.261 "accel_get_opc_assignments", 00:27:35.261 "notify_get_notifications", 00:27:35.261 "notify_get_types", 00:27:35.261 "bdev_get_histogram", 00:27:35.261 "bdev_enable_histogram", 00:27:35.261 "bdev_set_qos_limit", 00:27:35.261 "bdev_set_qd_sampling_period", 00:27:35.261 "bdev_get_bdevs", 00:27:35.261 "bdev_reset_iostat", 00:27:35.261 "bdev_get_iostat", 00:27:35.261 "bdev_examine", 00:27:35.261 "bdev_wait_for_examine", 00:27:35.261 "bdev_set_options", 00:27:35.261 "scsi_get_devices", 00:27:35.261 "thread_set_cpumask", 00:27:35.261 "framework_get_scheduler", 00:27:35.261 "framework_set_scheduler", 00:27:35.261 "framework_get_reactors", 00:27:35.261 "thread_get_io_channels", 00:27:35.261 "thread_get_pollers", 00:27:35.261 "thread_get_stats", 00:27:35.261 "framework_monitor_context_switch", 00:27:35.261 "spdk_kill_instance", 00:27:35.261 "log_enable_timestamps", 00:27:35.261 "log_get_flags", 00:27:35.261 "log_clear_flag", 00:27:35.261 "log_set_flag", 00:27:35.261 "log_get_level", 00:27:35.261 "log_set_level", 00:27:35.261 "log_get_print_level", 00:27:35.261 "log_set_print_level", 00:27:35.262 "framework_enable_cpumask_locks", 00:27:35.262 "framework_disable_cpumask_locks", 00:27:35.262 "framework_wait_init", 00:27:35.262 "framework_start_init", 00:27:35.262 "virtio_blk_create_transport", 00:27:35.262 "virtio_blk_get_transports", 
00:27:35.262 "vhost_controller_set_coalescing", 00:27:35.262 "vhost_get_controllers", 00:27:35.262 "vhost_delete_controller", 00:27:35.262 "vhost_create_blk_controller", 00:27:35.262 "vhost_scsi_controller_remove_target", 00:27:35.262 "vhost_scsi_controller_add_target", 00:27:35.262 "vhost_start_scsi_controller", 00:27:35.262 "vhost_create_scsi_controller", 00:27:35.262 "ublk_recover_disk", 00:27:35.262 "ublk_get_disks", 00:27:35.262 "ublk_stop_disk", 00:27:35.262 "ublk_start_disk", 00:27:35.262 "ublk_destroy_target", 00:27:35.262 "ublk_create_target", 00:27:35.262 "nbd_get_disks", 00:27:35.262 "nbd_stop_disk", 00:27:35.262 "nbd_start_disk", 00:27:35.262 "env_dpdk_get_mem_stats", 00:27:35.262 "nvmf_subsystem_get_listeners", 00:27:35.262 "nvmf_subsystem_get_qpairs", 00:27:35.262 "nvmf_subsystem_get_controllers", 00:27:35.262 "nvmf_get_stats", 00:27:35.262 "nvmf_get_transports", 00:27:35.262 "nvmf_create_transport", 00:27:35.262 "nvmf_get_targets", 00:27:35.262 "nvmf_delete_target", 00:27:35.262 "nvmf_create_target", 00:27:35.262 "nvmf_subsystem_allow_any_host", 00:27:35.262 "nvmf_subsystem_remove_host", 00:27:35.262 "nvmf_subsystem_add_host", 00:27:35.262 "nvmf_subsystem_remove_ns", 00:27:35.262 "nvmf_subsystem_add_ns", 00:27:35.262 "nvmf_subsystem_listener_set_ana_state", 00:27:35.262 "nvmf_discovery_get_referrals", 00:27:35.262 "nvmf_discovery_remove_referral", 00:27:35.262 "nvmf_discovery_add_referral", 00:27:35.262 "nvmf_subsystem_remove_listener", 00:27:35.262 "nvmf_subsystem_add_listener", 00:27:35.262 "nvmf_delete_subsystem", 00:27:35.262 "nvmf_create_subsystem", 00:27:35.262 "nvmf_get_subsystems", 00:27:35.262 "nvmf_set_crdt", 00:27:35.262 "nvmf_set_config", 00:27:35.262 "nvmf_set_max_subsystems", 00:27:35.262 "iscsi_set_options", 00:27:35.262 "iscsi_get_auth_groups", 00:27:35.262 "iscsi_auth_group_remove_secret", 00:27:35.262 "iscsi_auth_group_add_secret", 00:27:35.262 "iscsi_delete_auth_group", 00:27:35.262 "iscsi_create_auth_group", 00:27:35.262 "iscsi_set_discovery_auth", 00:27:35.262 "iscsi_get_options", 00:27:35.262 "iscsi_target_node_request_logout", 00:27:35.262 "iscsi_target_node_set_redirect", 00:27:35.262 "iscsi_target_node_set_auth", 00:27:35.262 "iscsi_target_node_add_lun", 00:27:35.262 "iscsi_get_connections", 00:27:35.262 "iscsi_portal_group_set_auth", 00:27:35.262 "iscsi_start_portal_group", 00:27:35.262 "iscsi_delete_portal_group", 00:27:35.262 "iscsi_create_portal_group", 00:27:35.262 "iscsi_get_portal_groups", 00:27:35.262 "iscsi_delete_target_node", 00:27:35.262 "iscsi_target_node_remove_pg_ig_maps", 00:27:35.262 "iscsi_target_node_add_pg_ig_maps", 00:27:35.262 "iscsi_create_target_node", 00:27:35.262 "iscsi_get_target_nodes", 00:27:35.262 "iscsi_delete_initiator_group", 00:27:35.262 "iscsi_initiator_group_remove_initiators", 00:27:35.262 "iscsi_initiator_group_add_initiators", 00:27:35.262 "iscsi_create_initiator_group", 00:27:35.262 "iscsi_get_initiator_groups", 00:27:35.262 "iaa_scan_accel_module", 00:27:35.262 "dsa_scan_accel_module", 00:27:35.262 "ioat_scan_accel_module", 00:27:35.262 "accel_error_inject_error", 00:27:35.262 "bdev_iscsi_delete", 00:27:35.262 "bdev_iscsi_create", 00:27:35.262 "bdev_iscsi_set_options", 00:27:35.262 "bdev_virtio_attach_controller", 00:27:35.262 "bdev_virtio_scsi_get_devices", 00:27:35.262 "bdev_virtio_detach_controller", 00:27:35.262 "bdev_virtio_blk_set_hotplug", 00:27:35.262 "bdev_ftl_set_property", 00:27:35.262 "bdev_ftl_get_properties", 00:27:35.262 "bdev_ftl_get_stats", 00:27:35.262 "bdev_ftl_unmap", 00:27:35.262 
"bdev_ftl_unload", 00:27:35.262 "bdev_ftl_delete", 00:27:35.262 "bdev_ftl_load", 00:27:35.262 "bdev_ftl_create", 00:27:35.262 "bdev_aio_delete", 00:27:35.262 "bdev_aio_rescan", 00:27:35.262 "bdev_aio_create", 00:27:35.262 "blobfs_create", 00:27:35.262 "blobfs_detect", 00:27:35.262 "blobfs_set_cache_size", 00:27:35.262 "bdev_zone_block_delete", 00:27:35.262 "bdev_zone_block_create", 00:27:35.262 "bdev_delay_delete", 00:27:35.262 "bdev_delay_create", 00:27:35.262 "bdev_delay_update_latency", 00:27:35.262 "bdev_split_delete", 00:27:35.262 "bdev_split_create", 00:27:35.262 "bdev_error_inject_error", 00:27:35.262 "bdev_error_delete", 00:27:35.262 "bdev_error_create", 00:27:35.262 "bdev_raid_set_options", 00:27:35.262 "bdev_raid_remove_base_bdev", 00:27:35.262 "bdev_raid_add_base_bdev", 00:27:35.262 "bdev_raid_delete", 00:27:35.262 "bdev_raid_create", 00:27:35.262 "bdev_raid_get_bdevs", 00:27:35.262 "bdev_lvol_grow_lvstore", 00:27:35.262 "bdev_lvol_get_lvols", 00:27:35.262 "bdev_lvol_get_lvstores", 00:27:35.262 "bdev_lvol_delete", 00:27:35.262 "bdev_lvol_set_read_only", 00:27:35.262 "bdev_lvol_resize", 00:27:35.262 "bdev_lvol_decouple_parent", 00:27:35.262 "bdev_lvol_inflate", 00:27:35.262 "bdev_lvol_rename", 00:27:35.262 "bdev_lvol_clone_bdev", 00:27:35.262 "bdev_lvol_clone", 00:27:35.262 "bdev_lvol_snapshot", 00:27:35.262 "bdev_lvol_create", 00:27:35.262 "bdev_lvol_delete_lvstore", 00:27:35.262 "bdev_lvol_rename_lvstore", 00:27:35.262 "bdev_lvol_create_lvstore", 00:27:35.262 "bdev_passthru_delete", 00:27:35.262 "bdev_passthru_create", 00:27:35.262 "bdev_nvme_cuse_unregister", 00:27:35.262 "bdev_nvme_cuse_register", 00:27:35.262 "bdev_opal_new_user", 00:27:35.262 "bdev_opal_set_lock_state", 00:27:35.262 "bdev_opal_delete", 00:27:35.262 "bdev_opal_get_info", 00:27:35.262 "bdev_opal_create", 00:27:35.262 "bdev_nvme_opal_revert", 00:27:35.262 "bdev_nvme_opal_init", 00:27:35.262 "bdev_nvme_send_cmd", 00:27:35.262 "bdev_nvme_get_path_iostat", 00:27:35.262 "bdev_nvme_get_mdns_discovery_info", 00:27:35.262 "bdev_nvme_stop_mdns_discovery", 00:27:35.262 "bdev_nvme_start_mdns_discovery", 00:27:35.262 "bdev_nvme_set_multipath_policy", 00:27:35.262 "bdev_nvme_set_preferred_path", 00:27:35.262 "bdev_nvme_get_io_paths", 00:27:35.262 "bdev_nvme_remove_error_injection", 00:27:35.262 "bdev_nvme_add_error_injection", 00:27:35.262 "bdev_nvme_get_discovery_info", 00:27:35.262 "bdev_nvme_stop_discovery", 00:27:35.262 "bdev_nvme_start_discovery", 00:27:35.262 "bdev_nvme_get_controller_health_info", 00:27:35.262 "bdev_nvme_disable_controller", 00:27:35.262 "bdev_nvme_enable_controller", 00:27:35.262 "bdev_nvme_reset_controller", 00:27:35.262 "bdev_nvme_get_transport_statistics", 00:27:35.262 "bdev_nvme_apply_firmware", 00:27:35.262 "bdev_nvme_detach_controller", 00:27:35.262 "bdev_nvme_get_controllers", 00:27:35.262 "bdev_nvme_attach_controller", 00:27:35.262 "bdev_nvme_set_hotplug", 00:27:35.262 "bdev_nvme_set_options", 00:27:35.262 "bdev_null_resize", 00:27:35.262 "bdev_null_delete", 00:27:35.262 "bdev_null_create", 00:27:35.262 "bdev_malloc_delete", 00:27:35.262 "bdev_malloc_create" 00:27:35.262 ] 00:27:35.520 16:04:39 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:27:35.520 16:04:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:35.521 16:04:39 -- common/autotest_common.sh@10 -- # set +x 00:27:35.521 16:04:39 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:35.521 16:04:39 -- spdkcli/tcp.sh@38 -- # killprocess 62146 00:27:35.521 16:04:39 -- common/autotest_common.sh@926 -- # '[' 
-z 62146 ']' 00:27:35.521 16:04:39 -- common/autotest_common.sh@930 -- # kill -0 62146 00:27:35.521 16:04:39 -- common/autotest_common.sh@931 -- # uname 00:27:35.521 16:04:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:35.521 16:04:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62146 00:27:35.521 killing process with pid 62146 00:27:35.521 16:04:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:35.521 16:04:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:35.521 16:04:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62146' 00:27:35.521 16:04:39 -- common/autotest_common.sh@945 -- # kill 62146 00:27:35.521 16:04:39 -- common/autotest_common.sh@950 -- # wait 62146 00:27:38.051 ************************************ 00:27:38.051 END TEST spdkcli_tcp 00:27:38.051 ************************************ 00:27:38.051 00:27:38.051 real 0m4.683s 00:27:38.051 user 0m8.370s 00:27:38.051 sys 0m0.807s 00:27:38.051 16:04:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.051 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:27:38.051 16:04:42 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:38.051 16:04:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:38.051 16:04:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:38.051 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:27:38.051 ************************************ 00:27:38.051 START TEST dpdk_mem_utility 00:27:38.051 ************************************ 00:27:38.051 16:04:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:27:38.051 * Looking for test storage... 00:27:38.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:27:38.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.051 16:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:38.051 16:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62268 00:27:38.051 16:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:38.051 16:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62268 00:27:38.051 16:04:42 -- common/autotest_common.sh@819 -- # '[' -z 62268 ']' 00:27:38.051 16:04:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.051 16:04:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:38.051 16:04:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.051 16:04:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:38.051 16:04:42 -- common/autotest_common.sh@10 -- # set +x 00:27:38.309 [2024-07-22 16:04:42.388437] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:38.309 [2024-07-22 16:04:42.388747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62268 ] 00:27:38.309 [2024-07-22 16:04:42.572356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.876 [2024-07-22 16:04:42.879810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:38.876 [2024-07-22 16:04:42.880092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.807 16:04:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:39.807 16:04:44 -- common/autotest_common.sh@852 -- # return 0 00:27:39.807 16:04:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:27:39.807 16:04:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:27:39.807 16:04:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.807 16:04:44 -- common/autotest_common.sh@10 -- # set +x 00:27:40.066 { 00:27:40.066 "filename": "/tmp/spdk_mem_dump.txt" 00:27:40.066 } 00:27:40.066 16:04:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.066 16:04:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:27:40.066 DPDK memory size 820.000000 MiB in 1 heap(s) 00:27:40.066 1 heaps totaling size 820.000000 MiB 00:27:40.066 size: 820.000000 MiB heap id: 0 00:27:40.066 end heaps---------- 00:27:40.066 8 mempools totaling size 598.116089 MiB 00:27:40.066 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:27:40.066 size: 158.602051 MiB name: PDU_data_out_Pool 00:27:40.066 size: 84.521057 MiB name: bdev_io_62268 00:27:40.066 size: 51.011292 MiB name: evtpool_62268 00:27:40.066 size: 50.003479 MiB name: msgpool_62268 00:27:40.066 size: 21.763794 MiB name: PDU_Pool 00:27:40.066 size: 19.513306 MiB name: SCSI_TASK_Pool 00:27:40.066 size: 0.026123 MiB name: Session_Pool 00:27:40.066 end mempools------- 00:27:40.066 6 memzones totaling size 4.142822 MiB 00:27:40.066 size: 1.000366 MiB name: RG_ring_0_62268 00:27:40.066 size: 1.000366 MiB name: RG_ring_1_62268 00:27:40.066 size: 1.000366 MiB name: RG_ring_4_62268 00:27:40.066 size: 1.000366 MiB name: RG_ring_5_62268 00:27:40.066 size: 0.125366 MiB name: RG_ring_2_62268 00:27:40.066 size: 0.015991 MiB name: RG_ring_3_62268 00:27:40.066 end memzones------- 00:27:40.066 16:04:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:27:40.066 heap id: 0 total size: 820.000000 MiB number of busy elements: 303 number of free elements: 18 00:27:40.066 list of free elements. 
size: 18.450806 MiB 00:27:40.066 element at address: 0x200000400000 with size: 1.999451 MiB 00:27:40.066 element at address: 0x200000800000 with size: 1.996887 MiB 00:27:40.066 element at address: 0x200007000000 with size: 1.995972 MiB 00:27:40.066 element at address: 0x20000b200000 with size: 1.995972 MiB 00:27:40.066 element at address: 0x200019100040 with size: 0.999939 MiB 00:27:40.066 element at address: 0x200019500040 with size: 0.999939 MiB 00:27:40.066 element at address: 0x200019600000 with size: 0.999084 MiB 00:27:40.066 element at address: 0x200003e00000 with size: 0.996094 MiB 00:27:40.066 element at address: 0x200032200000 with size: 0.994324 MiB 00:27:40.066 element at address: 0x200018e00000 with size: 0.959656 MiB 00:27:40.066 element at address: 0x200019900040 with size: 0.936401 MiB 00:27:40.066 element at address: 0x200000200000 with size: 0.829224 MiB 00:27:40.066 element at address: 0x20001b000000 with size: 0.564392 MiB 00:27:40.066 element at address: 0x200019200000 with size: 0.487976 MiB 00:27:40.066 element at address: 0x200019a00000 with size: 0.485413 MiB 00:27:40.066 element at address: 0x200013800000 with size: 0.467651 MiB 00:27:40.066 element at address: 0x200028400000 with size: 0.390442 MiB 00:27:40.066 element at address: 0x200003a00000 with size: 0.351990 MiB 00:27:40.066 list of standard malloc elements. size: 199.284790 MiB 00:27:40.066 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:27:40.066 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:27:40.066 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:27:40.066 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:27:40.066 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:27:40.066 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:27:40.066 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:27:40.066 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:27:40.066 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:27:40.066 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:27:40.066 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:27:40.066 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:27:40.066 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:27:40.066 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:27:40.067 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003aff980 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003affa80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200003eff000 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013877b80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013877c80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013877d80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013877e80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013877f80 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013878080 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013878180 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013878280 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013878380 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013878480 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200013878580 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:27:40.067 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:27:40.068 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:27:40.068 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x200019abc680 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0929c0 with size: 0.000244 MiB 
00:27:40.068 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:27:40.068 element at address: 0x200028463f40 with size: 0.000244 MiB 00:27:40.068 element at address: 0x200028464040 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846af80 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b080 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b180 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b280 with size: 0.000244 MiB 00:27:40.068 element at 
address: 0x20002846b380 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b480 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b580 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b680 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b780 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b880 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846b980 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846be80 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846c080 with size: 0.000244 MiB 00:27:40.068 element at address: 0x20002846c180 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c280 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c380 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c480 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c580 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c680 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c780 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c880 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846c980 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d080 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d180 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d280 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d380 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d480 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d580 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d680 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d780 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d880 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846d980 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846da80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846db80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846de80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846df80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e080 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e180 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e280 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e380 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e480 
with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e580 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e680 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e780 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e880 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846e980 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f080 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f180 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f280 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f380 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f480 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f580 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f680 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f780 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f880 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846f980 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:27:40.069 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:27:40.069 list of memzone associated elements. 
size: 602.264404 MiB 00:27:40.069 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:27:40.069 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:27:40.069 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:27:40.069 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:27:40.069 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:27:40.069 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62268_0 00:27:40.069 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:27:40.069 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62268_0 00:27:40.069 element at address: 0x200003fff340 with size: 48.003113 MiB 00:27:40.069 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62268_0 00:27:40.069 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:27:40.069 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:27:40.069 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:27:40.069 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:27:40.069 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:27:40.069 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62268 00:27:40.069 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:27:40.069 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62268 00:27:40.069 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:27:40.069 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62268 00:27:40.069 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:27:40.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:27:40.069 element at address: 0x200019abc780 with size: 1.008179 MiB 00:27:40.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:27:40.069 element at address: 0x200018efde00 with size: 1.008179 MiB 00:27:40.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:27:40.069 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:27:40.069 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:27:40.069 element at address: 0x200003eff100 with size: 1.000549 MiB 00:27:40.069 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62268 00:27:40.069 element at address: 0x200003affb80 with size: 1.000549 MiB 00:27:40.069 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62268 00:27:40.069 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:27:40.069 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62268 00:27:40.069 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:27:40.069 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62268 00:27:40.069 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:27:40.069 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62268 00:27:40.069 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:27:40.069 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:27:40.069 element at address: 0x200013878680 with size: 0.500549 MiB 00:27:40.069 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:27:40.069 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:27:40.069 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:27:40.069 element at address: 0x200003adf740 with size: 0.125549 MiB 00:27:40.069 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_62268 00:27:40.069 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:27:40.069 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:27:40.070 element at address: 0x200028464140 with size: 0.023804 MiB 00:27:40.070 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:27:40.070 element at address: 0x200003adb500 with size: 0.016174 MiB 00:27:40.070 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62268 00:27:40.070 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:27:40.070 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:27:40.070 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:27:40.070 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62268 00:27:40.070 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:27:40.070 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62268 00:27:40.070 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:27:40.070 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:27:40.070 16:04:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:27:40.070 16:04:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62268 00:27:40.070 16:04:44 -- common/autotest_common.sh@926 -- # '[' -z 62268 ']' 00:27:40.070 16:04:44 -- common/autotest_common.sh@930 -- # kill -0 62268 00:27:40.070 16:04:44 -- common/autotest_common.sh@931 -- # uname 00:27:40.070 16:04:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:40.070 16:04:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62268 00:27:40.070 16:04:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:40.070 killing process with pid 62268 00:27:40.070 16:04:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:40.070 16:04:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62268' 00:27:40.070 16:04:44 -- common/autotest_common.sh@945 -- # kill 62268 00:27:40.070 16:04:44 -- common/autotest_common.sh@950 -- # wait 62268 00:27:42.599 00:27:42.599 real 0m4.588s 00:27:42.599 user 0m4.605s 00:27:42.599 sys 0m0.840s 00:27:42.599 16:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.599 16:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:42.599 ************************************ 00:27:42.599 END TEST dpdk_mem_utility 00:27:42.599 ************************************ 00:27:42.599 16:04:46 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:42.599 16:04:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.599 16:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.599 16:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:42.599 ************************************ 00:27:42.599 START TEST event 00:27:42.599 ************************************ 00:27:42.599 16:04:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:27:42.857 * Looking for test storage... 
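The dpdk_mem_utility teardown traced above follows the usual autotest killprocess pattern: probe the pid with kill -0, confirm the command name with ps, then kill and wait so the next test starts clean. A minimal sketch of that pattern, assuming a simplified shape of the helper in test/common/autotest_common.sh (the real function has extra branches, e.g. for sudo-wrapped processes):

    killprocess() {
        local pid=$1
        # Nothing to do if the pid is empty or the process is already gone.
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0
        # On Linux, resolve the command name so the log records what is being killed
        # (the trace above resolves pid 62268 to reactor_0).
        if [[ "$(uname)" == "Linux" ]]; then
            echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        fi
        kill "$pid"
        # Wait for the process to exit so the workspace is quiescent before the next test.
        wait "$pid" 2>/dev/null || true
    }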
00:27:42.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:42.858 16:04:46 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:42.858 16:04:46 -- bdev/nbd_common.sh@6 -- # set -e 00:27:42.858 16:04:46 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:42.858 16:04:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:42.858 16:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.858 16:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:42.858 ************************************ 00:27:42.858 START TEST event_perf 00:27:42.858 ************************************ 00:27:42.858 16:04:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:27:42.858 Running I/O for 1 seconds...[2024-07-22 16:04:46.977053] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:42.858 [2024-07-22 16:04:46.978357] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62375 ] 00:27:43.115 [2024-07-22 16:04:47.167556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.373 [2024-07-22 16:04:47.451606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.373 [2024-07-22 16:04:47.451716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.373 [2024-07-22 16:04:47.451844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.373 [2024-07-22 16:04:47.451853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.747 Running I/O for 1 seconds... 00:27:44.747 lcore 0: 120936 00:27:44.747 lcore 1: 120937 00:27:44.747 lcore 2: 120938 00:27:44.747 lcore 3: 120938 00:27:44.747 done. 00:27:44.747 00:27:44.747 real 0m2.003s 00:27:44.747 user 0m4.712s 00:27:44.747 sys 0m0.191s 00:27:44.747 16:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.747 16:04:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.747 ************************************ 00:27:44.747 END TEST event_perf 00:27:44.747 ************************************ 00:27:44.747 16:04:48 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:44.747 16:04:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:44.747 16:04:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.747 16:04:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.747 ************************************ 00:27:44.747 START TEST event_reactor 00:27:44.747 ************************************ 00:27:44.747 16:04:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:27:45.006 [2024-07-22 16:04:49.022145] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
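Each test in this log is driven through the run_test helper, which prints the asterisk START/END banners and times the command; the real/user/sys lines after event_perf above come from that timing. A rough sketch, assuming a simplified form of the helper in test/common/autotest_common.sh:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # 'time' emits the real/user/sys summary that follows each test in this log.
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    # Example: the invocation recorded above for the per-core event benchmark.
    run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1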
00:27:45.006 [2024-07-22 16:04:49.022357] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62420 ] 00:27:45.006 [2024-07-22 16:04:49.195229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.265 [2024-07-22 16:04:49.476449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.168 test_start 00:27:47.168 oneshot 00:27:47.168 tick 100 00:27:47.168 tick 100 00:27:47.168 tick 250 00:27:47.168 tick 100 00:27:47.168 tick 100 00:27:47.168 tick 250 00:27:47.168 tick 100 00:27:47.168 tick 500 00:27:47.168 tick 100 00:27:47.168 tick 100 00:27:47.168 tick 250 00:27:47.168 tick 100 00:27:47.168 tick 100 00:27:47.168 test_end 00:27:47.168 ************************************ 00:27:47.168 END TEST event_reactor 00:27:47.168 ************************************ 00:27:47.168 00:27:47.168 real 0m1.988s 00:27:47.168 user 0m1.753s 00:27:47.168 sys 0m0.134s 00:27:47.168 16:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.168 16:04:50 -- common/autotest_common.sh@10 -- # set +x 00:27:47.168 16:04:51 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:47.168 16:04:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:27:47.168 16:04:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.168 16:04:51 -- common/autotest_common.sh@10 -- # set +x 00:27:47.168 ************************************ 00:27:47.168 START TEST event_reactor_perf 00:27:47.168 ************************************ 00:27:47.168 16:04:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:27:47.168 [2024-07-22 16:04:51.070975] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
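The event_reactor output above (test_start, the oneshot and tick lines, test_end) and the event_reactor_perf run starting here are standalone microbenchmarks; they can also be invoked directly from a built tree, for example (paths as used by this job; root is typically required for hugepage access):

    cd /home/vagrant/spdk_repo/spdk
    # Single-reactor test: prints test_start, one marker per timer event, then test_end.
    sudo ./test/event/reactor/reactor -t 1
    # Reactor throughput test: prints an "events per second" summary after the run.
    sudo ./test/event/reactor_perf/reactor_perf -t 1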
00:27:47.168 [2024-07-22 16:04:51.071167] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62462 ] 00:27:47.168 [2024-07-22 16:04:51.245725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.426 [2024-07-22 16:04:51.534004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.802 test_start 00:27:48.802 test_end 00:27:48.802 Performance: 283991 events per second 00:27:48.802 00:27:48.803 real 0m1.923s 00:27:48.803 user 0m1.687s 00:27:48.803 sys 0m0.136s 00:27:48.803 16:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.803 ************************************ 00:27:48.803 16:04:52 -- common/autotest_common.sh@10 -- # set +x 00:27:48.803 END TEST event_reactor_perf 00:27:48.803 ************************************ 00:27:48.803 16:04:52 -- event/event.sh@49 -- # uname -s 00:27:48.803 16:04:52 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:27:48.803 16:04:52 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:48.803 16:04:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:48.803 16:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:48.803 16:04:52 -- common/autotest_common.sh@10 -- # set +x 00:27:48.803 ************************************ 00:27:48.803 START TEST event_scheduler 00:27:48.803 ************************************ 00:27:48.803 16:04:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:27:49.061 * Looking for test storage... 00:27:49.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:27:49.061 16:04:53 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:27:49.061 16:04:53 -- scheduler/scheduler.sh@35 -- # scheduler_pid=62529 00:27:49.061 16:04:53 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:27:49.061 16:04:53 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:27:49.061 16:04:53 -- scheduler/scheduler.sh@37 -- # waitforlisten 62529 00:27:49.061 16:04:53 -- common/autotest_common.sh@819 -- # '[' -z 62529 ']' 00:27:49.061 16:04:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.061 16:04:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:49.061 16:04:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.061 16:04:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:49.061 16:04:53 -- common/autotest_common.sh@10 -- # set +x 00:27:49.061 [2024-07-22 16:04:53.176294] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
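The event_scheduler test starting here launches the scheduler app with --wait-for-rpc, waits for it to listen on /var/tmp/spdk.sock, and only then selects the dynamic scheduler and completes initialization over RPC (the framework_set_scheduler and framework_start_init calls show up in the trace below). A hedged sketch of that sequence with the standard rpc.py client, using a plain sleep where the test uses waitforlisten:

    cd /home/vagrant/spdk_repo/spdk
    # Start the app with subsystem init deferred; run as root if hugepages require it.
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    sleep 1                                           # stand-in for waitforlisten on /var/tmp/spdk.sock
    ./scripts/rpc.py framework_set_scheduler dynamic  # pick the dynamic scheduler before init
    ./scripts/rpc.py framework_start_init             # finish subsystem initialization
    # The scheduler_thread_create / scheduler_thread_set_active / scheduler_thread_delete calls
    # seen later come from a test-only plugin loaded via 'rpc.py --plugin scheduler_plugin'.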
00:27:49.061 [2024-07-22 16:04:53.176491] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62529 ] 00:27:49.318 [2024-07-22 16:04:53.357174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:49.576 [2024-07-22 16:04:53.619316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.576 [2024-07-22 16:04:53.619467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.576 [2024-07-22 16:04:53.619940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:49.576 [2024-07-22 16:04:53.620282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:49.835 16:04:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:49.835 16:04:54 -- common/autotest_common.sh@852 -- # return 0 00:27:49.835 16:04:54 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:27:49.835 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.835 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:49.835 POWER: Env isn't set yet! 00:27:49.835 POWER: Attempting to initialise ACPI cpufreq power management... 00:27:49.835 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:49.835 POWER: Cannot set governor of lcore 0 to userspace 00:27:49.835 POWER: Attempting to initialise PSTAT power management... 00:27:49.835 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:49.835 POWER: Cannot set governor of lcore 0 to performance 00:27:49.835 POWER: Attempting to initialise AMD PSTATE power management... 00:27:49.835 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:49.835 POWER: Cannot set governor of lcore 0 to userspace 00:27:49.835 POWER: Attempting to initialise CPPC power management... 00:27:49.835 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:27:49.835 POWER: Cannot set governor of lcore 0 to userspace 00:27:49.835 POWER: Attempting to initialise VM power management... 
00:27:49.835 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:27:49.835 POWER: Unable to set Power Management Environment for lcore 0 00:27:49.835 [2024-07-22 16:04:54.094278] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:27:49.835 [2024-07-22 16:04:54.094307] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:27:49.835 [2024-07-22 16:04:54.094328] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:27:49.835 [2024-07-22 16:04:54.094359] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:27:49.835 [2024-07-22 16:04:54.094379] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:27:49.835 [2024-07-22 16:04:54.094628] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:27:49.835 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.835 16:04:54 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:27:49.835 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.835 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 [2024-07-22 16:04:54.436163] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:27:50.402 16:04:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:50.402 16:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 ************************************ 00:27:50.402 START TEST scheduler_create_thread 00:27:50.402 ************************************ 00:27:50.402 16:04:54 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 2 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 3 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 4 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 5 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 6 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 7 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 8 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 9 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 10 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:50.402 16:04:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.402 16:04:54 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:27:50.402 16:04:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.402 16:04:54 -- common/autotest_common.sh@10 -- # set +x 00:27:51.337 16:04:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.337 16:04:55 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:27:51.337 16:04:55 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:27:51.337 16:04:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.337 16:04:55 -- common/autotest_common.sh@10 -- # set +x 00:27:52.713 ************************************ 00:27:52.713 END TEST scheduler_create_thread 00:27:52.713 ************************************ 00:27:52.713 16:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.713 00:27:52.713 real 0m2.140s 00:27:52.713 user 0m0.020s 00:27:52.713 sys 0m0.005s 00:27:52.713 16:04:56 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.713 16:04:56 -- common/autotest_common.sh@10 -- # set +x 00:27:52.713 16:04:56 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:52.713 16:04:56 -- scheduler/scheduler.sh@46 -- # killprocess 62529 00:27:52.713 16:04:56 -- common/autotest_common.sh@926 -- # '[' -z 62529 ']' 00:27:52.713 16:04:56 -- common/autotest_common.sh@930 -- # kill -0 62529 00:27:52.713 16:04:56 -- common/autotest_common.sh@931 -- # uname 00:27:52.713 16:04:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:52.713 16:04:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62529 00:27:52.713 killing process with pid 62529 00:27:52.713 16:04:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:27:52.713 16:04:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:27:52.713 16:04:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62529' 00:27:52.713 16:04:56 -- common/autotest_common.sh@945 -- # kill 62529 00:27:52.713 16:04:56 -- common/autotest_common.sh@950 -- # wait 62529 00:27:52.971 [2024-07-22 16:04:57.071129] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:27:54.345 00:27:54.346 real 0m5.352s 00:27:54.346 user 0m8.513s 00:27:54.346 sys 0m0.610s 00:27:54.346 16:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.346 ************************************ 00:27:54.346 END TEST event_scheduler 00:27:54.346 ************************************ 00:27:54.346 16:04:58 -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 16:04:58 -- event/event.sh@51 -- # modprobe -n nbd 00:27:54.346 16:04:58 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:27:54.346 16:04:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:54.346 16:04:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.346 16:04:58 -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 ************************************ 00:27:54.346 START TEST app_repeat 00:27:54.346 ************************************ 00:27:54.346 16:04:58 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:27:54.346 16:04:58 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:54.346 16:04:58 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:54.346 16:04:58 -- event/event.sh@13 -- # local nbd_list 00:27:54.346 16:04:58 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:54.346 16:04:58 -- event/event.sh@14 -- # local bdev_list 00:27:54.346 16:04:58 -- event/event.sh@15 -- # local repeat_times=4 00:27:54.346 16:04:58 -- event/event.sh@17 -- # modprobe nbd 00:27:54.346 16:04:58 -- event/event.sh@19 -- # repeat_pid=62635 00:27:54.346 16:04:58 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:27:54.346 16:04:58 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:27:54.346 Process app_repeat pid: 62635 00:27:54.346 spdk_app_start Round 0 00:27:54.346 16:04:58 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62635' 00:27:54.346 16:04:58 -- event/event.sh@23 -- # for i in {0..2} 00:27:54.346 16:04:58 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:27:54.346 16:04:58 -- event/event.sh@25 -- # waitforlisten 62635 /var/tmp/spdk-nbd.sock 00:27:54.346 16:04:58 -- common/autotest_common.sh@819 -- # '[' -z 62635 ']' 00:27:54.346 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:27:54.346 16:04:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:54.346 16:04:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:54.346 16:04:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:54.346 16:04:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:54.346 16:04:58 -- common/autotest_common.sh@10 -- # set +x 00:27:54.346 [2024-07-22 16:04:58.486805] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:54.346 [2024-07-22 16:04:58.487113] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62635 ] 00:27:54.604 [2024-07-22 16:04:58.677443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:54.862 [2024-07-22 16:04:58.936655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.862 [2024-07-22 16:04:58.936670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.490 16:04:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:55.490 16:04:59 -- common/autotest_common.sh@852 -- # return 0 00:27:55.490 16:04:59 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:55.747 Malloc0 00:27:55.747 16:04:59 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:56.005 Malloc1 00:27:56.005 16:05:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@12 -- # local i 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.005 16:05:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:56.262 /dev/nbd0 00:27:56.262 16:05:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:56.262 16:05:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:56.262 16:05:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:56.262 16:05:00 -- common/autotest_common.sh@857 -- # local i 00:27:56.262 16:05:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:56.262 16:05:00 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:56.262 16:05:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:56.262 16:05:00 -- common/autotest_common.sh@861 -- # break 00:27:56.262 16:05:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:56.262 16:05:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:56.262 16:05:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:56.262 1+0 records in 00:27:56.262 1+0 records out 00:27:56.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331417 s, 12.4 MB/s 00:27:56.262 16:05:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:56.262 16:05:00 -- common/autotest_common.sh@874 -- # size=4096 00:27:56.262 16:05:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:56.262 16:05:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:56.262 16:05:00 -- common/autotest_common.sh@877 -- # return 0 00:27:56.262 16:05:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:56.262 16:05:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.262 16:05:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:56.520 /dev/nbd1 00:27:56.520 16:05:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:56.520 16:05:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:56.520 16:05:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:27:56.520 16:05:00 -- common/autotest_common.sh@857 -- # local i 00:27:56.520 16:05:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:56.520 16:05:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:56.520 16:05:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:27:56.520 16:05:00 -- common/autotest_common.sh@861 -- # break 00:27:56.520 16:05:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:56.520 16:05:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:56.520 16:05:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:56.520 1+0 records in 00:27:56.520 1+0 records out 00:27:56.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353944 s, 11.6 MB/s 00:27:56.520 16:05:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:56.520 16:05:00 -- common/autotest_common.sh@874 -- # size=4096 00:27:56.520 16:05:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:56.520 16:05:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:56.520 16:05:00 -- common/autotest_common.sh@877 -- # return 0 00:27:56.520 16:05:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:56.520 16:05:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:56.520 16:05:00 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:56.520 16:05:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.520 16:05:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:56.778 { 00:27:56.778 "nbd_device": "/dev/nbd0", 00:27:56.778 "bdev_name": "Malloc0" 00:27:56.778 }, 00:27:56.778 { 00:27:56.778 "nbd_device": "/dev/nbd1", 
00:27:56.778 "bdev_name": "Malloc1" 00:27:56.778 } 00:27:56.778 ]' 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:56.778 { 00:27:56.778 "nbd_device": "/dev/nbd0", 00:27:56.778 "bdev_name": "Malloc0" 00:27:56.778 }, 00:27:56.778 { 00:27:56.778 "nbd_device": "/dev/nbd1", 00:27:56.778 "bdev_name": "Malloc1" 00:27:56.778 } 00:27:56.778 ]' 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:56.778 /dev/nbd1' 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:56.778 /dev/nbd1' 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@65 -- # count=2 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@66 -- # echo 2 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@95 -- # count=2 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:56.778 256+0 records in 00:27:56.778 256+0 records out 00:27:56.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00764285 s, 137 MB/s 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:56.778 16:05:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:56.778 256+0 records in 00:27:56.778 256+0 records out 00:27:56.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027842 s, 37.7 MB/s 00:27:56.778 16:05:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:56.778 16:05:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:57.035 256+0 records in 00:27:57.035 256+0 records out 00:27:57.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0376785 s, 27.8 MB/s 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@51 -- # local i 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:57.035 16:05:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@41 -- # break 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@45 -- # return 0 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:57.293 16:05:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@41 -- # break 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@45 -- # return 0 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.550 16:05:01 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@65 -- # true 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@65 -- # count=0 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@104 -- # count=0 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:57.808 16:05:01 -- bdev/nbd_common.sh@109 -- # return 0 00:27:57.808 16:05:01 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:58.375 16:05:02 -- event/event.sh@35 -- # sleep 3 00:27:59.807 [2024-07-22 16:05:03.663111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:59.807 [2024-07-22 16:05:03.917417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.807 [2024-07-22 
16:05:03.917420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.065 [2024-07-22 16:05:04.125610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:00.065 [2024-07-22 16:05:04.126718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:01.442 16:05:05 -- event/event.sh@23 -- # for i in {0..2} 00:28:01.442 spdk_app_start Round 1 00:28:01.442 16:05:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:28:01.442 16:05:05 -- event/event.sh@25 -- # waitforlisten 62635 /var/tmp/spdk-nbd.sock 00:28:01.442 16:05:05 -- common/autotest_common.sh@819 -- # '[' -z 62635 ']' 00:28:01.442 16:05:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:01.442 16:05:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:01.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:01.442 16:05:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:01.442 16:05:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:01.442 16:05:05 -- common/autotest_common.sh@10 -- # set +x 00:28:01.442 16:05:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:01.442 16:05:05 -- common/autotest_common.sh@852 -- # return 0 00:28:01.442 16:05:05 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:01.700 Malloc0 00:28:01.958 16:05:05 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:02.217 Malloc1 00:28:02.217 16:05:06 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@12 -- # local i 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:02.217 /dev/nbd0 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:02.217 16:05:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:02.217 16:05:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:02.217 16:05:06 -- common/autotest_common.sh@857 -- # local i 00:28:02.217 16:05:06 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:28:02.217 16:05:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:02.217 16:05:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:02.476 16:05:06 -- common/autotest_common.sh@861 -- # break 00:28:02.476 16:05:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:02.476 16:05:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:02.476 16:05:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:02.476 1+0 records in 00:28:02.476 1+0 records out 00:28:02.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473228 s, 8.7 MB/s 00:28:02.476 16:05:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:02.476 16:05:06 -- common/autotest_common.sh@874 -- # size=4096 00:28:02.476 16:05:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:02.476 16:05:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:02.476 16:05:06 -- common/autotest_common.sh@877 -- # return 0 00:28:02.476 16:05:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:02.476 16:05:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:02.476 16:05:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:02.735 /dev/nbd1 00:28:02.735 16:05:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:02.735 16:05:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:02.735 16:05:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:02.735 16:05:06 -- common/autotest_common.sh@857 -- # local i 00:28:02.735 16:05:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:02.735 16:05:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:02.735 16:05:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:02.735 16:05:06 -- common/autotest_common.sh@861 -- # break 00:28:02.735 16:05:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:02.735 16:05:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:02.735 16:05:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:02.735 1+0 records in 00:28:02.735 1+0 records out 00:28:02.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253517 s, 16.2 MB/s 00:28:02.735 16:05:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:02.735 16:05:06 -- common/autotest_common.sh@874 -- # size=4096 00:28:02.735 16:05:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:02.735 16:05:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:02.735 16:05:06 -- common/autotest_common.sh@877 -- # return 0 00:28:02.735 16:05:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:02.735 16:05:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:02.735 16:05:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:02.735 16:05:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.735 16:05:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:02.993 { 00:28:02.993 "nbd_device": "/dev/nbd0", 00:28:02.993 "bdev_name": "Malloc0" 00:28:02.993 }, 00:28:02.993 { 00:28:02.993 
"nbd_device": "/dev/nbd1", 00:28:02.993 "bdev_name": "Malloc1" 00:28:02.993 } 00:28:02.993 ]' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:02.993 { 00:28:02.993 "nbd_device": "/dev/nbd0", 00:28:02.993 "bdev_name": "Malloc0" 00:28:02.993 }, 00:28:02.993 { 00:28:02.993 "nbd_device": "/dev/nbd1", 00:28:02.993 "bdev_name": "Malloc1" 00:28:02.993 } 00:28:02.993 ]' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:02.993 /dev/nbd1' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:02.993 /dev/nbd1' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@65 -- # count=2 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@95 -- # count=2 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:02.993 256+0 records in 00:28:02.993 256+0 records out 00:28:02.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100201 s, 105 MB/s 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:02.993 256+0 records in 00:28:02.993 256+0 records out 00:28:02.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256536 s, 40.9 MB/s 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:02.993 256+0 records in 00:28:02.993 256+0 records out 00:28:02.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031886 s, 32.9 MB/s 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:02.993 16:05:07 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@51 -- # local i 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:02.993 16:05:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@41 -- # break 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@45 -- # return 0 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:03.252 16:05:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@41 -- # break 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@45 -- # return 0 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:03.510 16:05:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:03.769 16:05:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:03.769 16:05:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:03.769 16:05:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:03.769 16:05:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:03.769 16:05:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:03.769 16:05:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:03.769 16:05:07 -- bdev/nbd_common.sh@65 -- # true 00:28:03.769 16:05:08 -- bdev/nbd_common.sh@65 -- # count=0 00:28:03.769 16:05:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:03.769 16:05:08 -- bdev/nbd_common.sh@104 -- # count=0 00:28:03.769 16:05:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:03.769 16:05:08 -- bdev/nbd_common.sh@109 -- # return 0 00:28:03.769 16:05:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:04.349 16:05:08 -- event/event.sh@35 -- # sleep 3 00:28:05.722 [2024-07-22 16:05:09.713111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:05.722 [2024-07-22 16:05:09.961942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
00:28:05.722 [2024-07-22 16:05:09.961947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.980 [2024-07-22 16:05:10.172640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:05.980 [2024-07-22 16:05:10.173042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:07.380 16:05:11 -- event/event.sh@23 -- # for i in {0..2} 00:28:07.380 16:05:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:28:07.380 spdk_app_start Round 2 00:28:07.380 16:05:11 -- event/event.sh@25 -- # waitforlisten 62635 /var/tmp/spdk-nbd.sock 00:28:07.380 16:05:11 -- common/autotest_common.sh@819 -- # '[' -z 62635 ']' 00:28:07.380 16:05:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:07.380 16:05:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:07.380 16:05:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:07.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:07.380 16:05:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:07.380 16:05:11 -- common/autotest_common.sh@10 -- # set +x 00:28:07.638 16:05:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:07.638 16:05:11 -- common/autotest_common.sh@852 -- # return 0 00:28:07.638 16:05:11 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:07.897 Malloc0 00:28:07.897 16:05:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:08.156 Malloc1 00:28:08.156 16:05:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@12 -- # local i 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:08.156 16:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:08.416 /dev/nbd0 00:28:08.416 16:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:08.416 16:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:08.416 16:05:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:08.416 16:05:12 -- common/autotest_common.sh@857 -- # local i 00:28:08.416 16:05:12 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:08.416 16:05:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:08.416 16:05:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:08.416 16:05:12 -- common/autotest_common.sh@861 -- # break 00:28:08.416 16:05:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:08.416 16:05:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:08.416 16:05:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:08.416 1+0 records in 00:28:08.416 1+0 records out 00:28:08.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449013 s, 9.1 MB/s 00:28:08.416 16:05:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:08.416 16:05:12 -- common/autotest_common.sh@874 -- # size=4096 00:28:08.416 16:05:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:08.416 16:05:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:08.416 16:05:12 -- common/autotest_common.sh@877 -- # return 0 00:28:08.416 16:05:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:08.416 16:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:08.416 16:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:08.675 /dev/nbd1 00:28:08.675 16:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:08.675 16:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:08.675 16:05:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:08.675 16:05:12 -- common/autotest_common.sh@857 -- # local i 00:28:08.675 16:05:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:08.675 16:05:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:08.675 16:05:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:08.675 16:05:12 -- common/autotest_common.sh@861 -- # break 00:28:08.675 16:05:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:08.675 16:05:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:08.675 16:05:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:08.675 1+0 records in 00:28:08.675 1+0 records out 00:28:08.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340505 s, 12.0 MB/s 00:28:08.675 16:05:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:08.675 16:05:12 -- common/autotest_common.sh@874 -- # size=4096 00:28:08.675 16:05:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:08.675 16:05:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:08.675 16:05:12 -- common/autotest_common.sh@877 -- # return 0 00:28:08.675 16:05:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:08.675 16:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:08.675 16:05:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:08.675 16:05:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.675 16:05:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:09.242 { 00:28:09.242 "nbd_device": "/dev/nbd0", 00:28:09.242 "bdev_name": "Malloc0" 
00:28:09.242 }, 00:28:09.242 { 00:28:09.242 "nbd_device": "/dev/nbd1", 00:28:09.242 "bdev_name": "Malloc1" 00:28:09.242 } 00:28:09.242 ]' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:09.242 { 00:28:09.242 "nbd_device": "/dev/nbd0", 00:28:09.242 "bdev_name": "Malloc0" 00:28:09.242 }, 00:28:09.242 { 00:28:09.242 "nbd_device": "/dev/nbd1", 00:28:09.242 "bdev_name": "Malloc1" 00:28:09.242 } 00:28:09.242 ]' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:09.242 /dev/nbd1' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:09.242 /dev/nbd1' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@65 -- # count=2 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@95 -- # count=2 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:09.242 256+0 records in 00:28:09.242 256+0 records out 00:28:09.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00961981 s, 109 MB/s 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:09.242 256+0 records in 00:28:09.242 256+0 records out 00:28:09.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264887 s, 39.6 MB/s 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:09.242 256+0 records in 00:28:09.242 256+0 records out 00:28:09.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312738 s, 33.5 MB/s 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@51 -- # local i 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:09.242 16:05:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@41 -- # break 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@45 -- # return 0 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:09.501 16:05:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@41 -- # break 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@45 -- # return 0 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:09.759 16:05:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@65 -- # true 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@65 -- # count=0 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@104 -- # count=0 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:10.018 16:05:14 -- bdev/nbd_common.sh@109 -- # return 0 00:28:10.018 16:05:14 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:10.276 16:05:14 -- event/event.sh@35 -- # sleep 3 00:28:11.652 [2024-07-22 16:05:15.800437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:11.910 [2024-07-22 16:05:16.043072] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 0 00:28:11.910 [2024-07-22 16:05:16.043072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.169 [2024-07-22 16:05:16.253694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:12.169 [2024-07-22 16:05:16.253792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:13.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:13.544 16:05:17 -- event/event.sh@38 -- # waitforlisten 62635 /var/tmp/spdk-nbd.sock 00:28:13.544 16:05:17 -- common/autotest_common.sh@819 -- # '[' -z 62635 ']' 00:28:13.544 16:05:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:13.544 16:05:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:13.544 16:05:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:13.544 16:05:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:13.544 16:05:17 -- common/autotest_common.sh@10 -- # set +x 00:28:13.544 16:05:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:13.544 16:05:17 -- common/autotest_common.sh@852 -- # return 0 00:28:13.544 16:05:17 -- event/event.sh@39 -- # killprocess 62635 00:28:13.544 16:05:17 -- common/autotest_common.sh@926 -- # '[' -z 62635 ']' 00:28:13.544 16:05:17 -- common/autotest_common.sh@930 -- # kill -0 62635 00:28:13.544 16:05:17 -- common/autotest_common.sh@931 -- # uname 00:28:13.544 16:05:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:13.544 16:05:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 62635 00:28:13.802 killing process with pid 62635 00:28:13.802 16:05:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:13.802 16:05:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:13.802 16:05:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 62635' 00:28:13.802 16:05:17 -- common/autotest_common.sh@945 -- # kill 62635 00:28:13.802 16:05:17 -- common/autotest_common.sh@950 -- # wait 62635 00:28:15.177 spdk_app_start is called in Round 0. 00:28:15.177 Shutdown signal received, stop current app iteration 00:28:15.177 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:28:15.177 spdk_app_start is called in Round 1. 00:28:15.177 Shutdown signal received, stop current app iteration 00:28:15.177 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:28:15.177 spdk_app_start is called in Round 2. 00:28:15.177 Shutdown signal received, stop current app iteration 00:28:15.177 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:28:15.177 spdk_app_start is called in Round 3. 
00:28:15.177 Shutdown signal received, stop current app iteration 00:28:15.177 16:05:19 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:28:15.177 16:05:19 -- event/event.sh@42 -- # return 0 00:28:15.177 00:28:15.177 real 0m20.603s 00:28:15.177 user 0m43.334s 00:28:15.177 sys 0m3.361s 00:28:15.177 16:05:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.177 ************************************ 00:28:15.177 END TEST app_repeat 00:28:15.177 ************************************ 00:28:15.177 16:05:19 -- common/autotest_common.sh@10 -- # set +x 00:28:15.177 16:05:19 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:28:15.177 16:05:19 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:15.177 16:05:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:15.177 16:05:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:15.177 16:05:19 -- common/autotest_common.sh@10 -- # set +x 00:28:15.177 ************************************ 00:28:15.177 START TEST cpu_locks 00:28:15.177 ************************************ 00:28:15.177 16:05:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:15.177 * Looking for test storage... 00:28:15.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:28:15.177 16:05:19 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:28:15.177 16:05:19 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:28:15.177 16:05:19 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:28:15.177 16:05:19 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:28:15.177 16:05:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:15.177 16:05:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:15.177 16:05:19 -- common/autotest_common.sh@10 -- # set +x 00:28:15.178 ************************************ 00:28:15.178 START TEST default_locks 00:28:15.178 ************************************ 00:28:15.178 16:05:19 -- common/autotest_common.sh@1104 -- # default_locks 00:28:15.178 16:05:19 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63131 00:28:15.178 16:05:19 -- event/cpu_locks.sh@47 -- # waitforlisten 63131 00:28:15.178 16:05:19 -- common/autotest_common.sh@819 -- # '[' -z 63131 ']' 00:28:15.178 16:05:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.178 16:05:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:15.178 16:05:19 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:15.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.178 16:05:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.178 16:05:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:15.178 16:05:19 -- common/autotest_common.sh@10 -- # set +x 00:28:15.178 [2024-07-22 16:05:19.240315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
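Recap of the app_repeat rounds that finish above: each round drives the nbd app purely through rpc.py on /var/tmp/spdk-nbd.sock — create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device and cmp it back, then stop the disks and kill the app. A minimal sketch of one round under those assumptions (rpc.py path and socket as used in this run; the temp file stands in for test/event/nbdrandtest):

  # One app_repeat round, reconstructed from the RPC calls in the trace above.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096                    # -> Malloc0
  $rpc bdev_malloc_create 64 4096                    # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  tmp=$(mktemp)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256     # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it out
    cmp -b -n 1M "$tmp" "$dev"                              # read it back and verify
  done
  rm -f "$tmp"
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM                    # event.sh then sleeps 3s and starts the next round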
00:28:15.178 [2024-07-22 16:05:19.241840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63131 ] 00:28:15.178 [2024-07-22 16:05:19.405508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.442 [2024-07-22 16:05:19.650094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:15.442 [2024-07-22 16:05:19.650694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.831 16:05:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:16.831 16:05:21 -- common/autotest_common.sh@852 -- # return 0 00:28:16.831 16:05:21 -- event/cpu_locks.sh@49 -- # locks_exist 63131 00:28:16.831 16:05:21 -- event/cpu_locks.sh@22 -- # lslocks -p 63131 00:28:16.831 16:05:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:17.396 16:05:21 -- event/cpu_locks.sh@50 -- # killprocess 63131 00:28:17.396 16:05:21 -- common/autotest_common.sh@926 -- # '[' -z 63131 ']' 00:28:17.396 16:05:21 -- common/autotest_common.sh@930 -- # kill -0 63131 00:28:17.396 16:05:21 -- common/autotest_common.sh@931 -- # uname 00:28:17.396 16:05:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:17.396 16:05:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63131 00:28:17.396 killing process with pid 63131 00:28:17.396 16:05:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:17.396 16:05:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:17.396 16:05:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63131' 00:28:17.396 16:05:21 -- common/autotest_common.sh@945 -- # kill 63131 00:28:17.396 16:05:21 -- common/autotest_common.sh@950 -- # wait 63131 00:28:19.930 16:05:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63131 00:28:19.930 16:05:23 -- common/autotest_common.sh@640 -- # local es=0 00:28:19.930 16:05:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 63131 00:28:19.930 16:05:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:28:19.930 16:05:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.930 16:05:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:28:19.930 16:05:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:19.930 16:05:23 -- common/autotest_common.sh@643 -- # waitforlisten 63131 00:28:19.930 16:05:23 -- common/autotest_common.sh@819 -- # '[' -z 63131 ']' 00:28:19.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.930 16:05:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.930 16:05:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:19.930 16:05:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:19.930 16:05:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:19.930 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 ERROR: process (pid: 63131) is no longer running 00:28:19.930 ************************************ 00:28:19.930 END TEST default_locks 00:28:19.930 ************************************ 00:28:19.930 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (63131) - No such process 00:28:19.930 16:05:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:19.930 16:05:23 -- common/autotest_common.sh@852 -- # return 1 00:28:19.930 16:05:23 -- common/autotest_common.sh@643 -- # es=1 00:28:19.930 16:05:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:19.930 16:05:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:19.930 16:05:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:19.930 16:05:23 -- event/cpu_locks.sh@54 -- # no_locks 00:28:19.930 16:05:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:19.930 16:05:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:28:19.930 16:05:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:19.930 00:28:19.930 real 0m4.748s 00:28:19.930 user 0m4.916s 00:28:19.930 sys 0m0.882s 00:28:19.930 16:05:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.930 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 16:05:23 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:28:19.930 16:05:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:19.930 16:05:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:19.930 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 ************************************ 00:28:19.930 START TEST default_locks_via_rpc 00:28:19.930 ************************************ 00:28:19.930 16:05:23 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:28:19.930 16:05:23 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63214 00:28:19.930 16:05:23 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:19.930 16:05:23 -- event/cpu_locks.sh@63 -- # waitforlisten 63214 00:28:19.930 16:05:23 -- common/autotest_common.sh@819 -- # '[' -z 63214 ']' 00:28:19.930 16:05:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.930 16:05:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:19.930 16:05:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.930 16:05:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:19.930 16:05:23 -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 [2024-07-22 16:05:24.048548] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:19.930 [2024-07-22 16:05:24.048717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63214 ] 00:28:20.188 [2024-07-22 16:05:24.218284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.446 [2024-07-22 16:05:24.475400] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:20.446 [2024-07-22 16:05:24.475664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.379 16:05:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:21.379 16:05:25 -- common/autotest_common.sh@852 -- # return 0 00:28:21.379 16:05:25 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:28:21.379 16:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.379 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:28:21.637 16:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.637 16:05:25 -- event/cpu_locks.sh@67 -- # no_locks 00:28:21.637 16:05:25 -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:21.637 16:05:25 -- event/cpu_locks.sh@26 -- # local lock_files 00:28:21.637 16:05:25 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:21.637 16:05:25 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:28:21.637 16:05:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.637 16:05:25 -- common/autotest_common.sh@10 -- # set +x 00:28:21.637 16:05:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.637 16:05:25 -- event/cpu_locks.sh@71 -- # locks_exist 63214 00:28:21.637 16:05:25 -- event/cpu_locks.sh@22 -- # lslocks -p 63214 00:28:21.637 16:05:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:21.897 16:05:26 -- event/cpu_locks.sh@73 -- # killprocess 63214 00:28:21.897 16:05:26 -- common/autotest_common.sh@926 -- # '[' -z 63214 ']' 00:28:21.897 16:05:26 -- common/autotest_common.sh@930 -- # kill -0 63214 00:28:21.897 16:05:26 -- common/autotest_common.sh@931 -- # uname 00:28:21.897 16:05:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:21.897 16:05:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63214 00:28:21.897 killing process with pid 63214 00:28:21.897 16:05:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:21.897 16:05:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:21.897 16:05:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63214' 00:28:21.897 16:05:26 -- common/autotest_common.sh@945 -- # kill 63214 00:28:21.897 16:05:26 -- common/autotest_common.sh@950 -- # wait 63214 00:28:24.429 00:28:24.429 real 0m4.628s 00:28:24.429 user 0m4.654s 00:28:24.429 sys 0m0.820s 00:28:24.429 16:05:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.429 16:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:24.429 ************************************ 00:28:24.429 END TEST default_locks_via_rpc 00:28:24.429 ************************************ 00:28:24.429 16:05:28 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:28:24.429 16:05:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.429 16:05:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.429 16:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:24.429 
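The default_locks and default_locks_via_rpc cases that finish above both reduce to one probe, run against each target PID: list the file locks held by the spdk_tgt process and look for its spdk_cpu_lock entry. A minimal sketch of that probe, mirroring the locks_exist helper seen in this trace (the example PID is the one from the run above):

  # True while the target still holds its CPU core lock; false once it has been killed.
  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 63214 && echo "core lock held by pid 63214"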
************************************ 00:28:24.429 START TEST non_locking_app_on_locked_coremask 00:28:24.429 ************************************ 00:28:24.429 16:05:28 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:28:24.429 16:05:28 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63296 00:28:24.429 16:05:28 -- event/cpu_locks.sh@81 -- # waitforlisten 63296 /var/tmp/spdk.sock 00:28:24.429 16:05:28 -- common/autotest_common.sh@819 -- # '[' -z 63296 ']' 00:28:24.429 16:05:28 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:24.429 16:05:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.429 16:05:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:24.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.429 16:05:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.429 16:05:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:24.429 16:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:24.688 [2024-07-22 16:05:28.742745] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:24.688 [2024-07-22 16:05:28.742950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63296 ] 00:28:24.688 [2024-07-22 16:05:28.920899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.968 [2024-07-22 16:05:29.221090] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:24.968 [2024-07-22 16:05:29.221415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.358 16:05:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:26.358 16:05:30 -- common/autotest_common.sh@852 -- # return 0 00:28:26.358 16:05:30 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63319 00:28:26.358 16:05:30 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:28:26.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:26.359 16:05:30 -- event/cpu_locks.sh@85 -- # waitforlisten 63319 /var/tmp/spdk2.sock 00:28:26.359 16:05:30 -- common/autotest_common.sh@819 -- # '[' -z 63319 ']' 00:28:26.359 16:05:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:26.359 16:05:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:26.359 16:05:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:26.359 16:05:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:26.359 16:05:30 -- common/autotest_common.sh@10 -- # set +x 00:28:26.359 [2024-07-22 16:05:30.464954] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:26.359 [2024-07-22 16:05:30.465628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63319 ] 00:28:26.617 [2024-07-22 16:05:30.655085] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
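The "CPU core locks deactivated." notice just above is the crux of non_locking_app_on_locked_coremask: a second target is started on the same core mask and only comes up because it opts out of core locking. A sketch of the two launches, using the binary, mask, and RPC socket from this trace (backgrounding and readiness waits are simplified away):

  # The first target claims core 0; the second shares it by skipping the lock.
  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                                                  # holds spdk_cpu_lock for core 0
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # prints "CPU core locks deactivated."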
00:28:26.617 [2024-07-22 16:05:30.655184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.186 [2024-07-22 16:05:31.156550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:27.186 [2024-07-22 16:05:31.156790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.105 16:05:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:29.105 16:05:32 -- common/autotest_common.sh@852 -- # return 0 00:28:29.105 16:05:32 -- event/cpu_locks.sh@87 -- # locks_exist 63296 00:28:29.105 16:05:32 -- event/cpu_locks.sh@22 -- # lslocks -p 63296 00:28:29.105 16:05:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:29.673 16:05:33 -- event/cpu_locks.sh@89 -- # killprocess 63296 00:28:29.673 16:05:33 -- common/autotest_common.sh@926 -- # '[' -z 63296 ']' 00:28:29.673 16:05:33 -- common/autotest_common.sh@930 -- # kill -0 63296 00:28:29.673 16:05:33 -- common/autotest_common.sh@931 -- # uname 00:28:29.673 16:05:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:29.673 16:05:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63296 00:28:29.673 killing process with pid 63296 00:28:29.673 16:05:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:29.673 16:05:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:29.673 16:05:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63296' 00:28:29.673 16:05:33 -- common/autotest_common.sh@945 -- # kill 63296 00:28:29.673 16:05:33 -- common/autotest_common.sh@950 -- # wait 63296 00:28:34.940 16:05:38 -- event/cpu_locks.sh@90 -- # killprocess 63319 00:28:34.940 16:05:38 -- common/autotest_common.sh@926 -- # '[' -z 63319 ']' 00:28:34.940 16:05:38 -- common/autotest_common.sh@930 -- # kill -0 63319 00:28:34.940 16:05:38 -- common/autotest_common.sh@931 -- # uname 00:28:34.940 16:05:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:34.940 16:05:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63319 00:28:34.940 killing process with pid 63319 00:28:34.940 16:05:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:34.940 16:05:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:34.940 16:05:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63319' 00:28:34.940 16:05:38 -- common/autotest_common.sh@945 -- # kill 63319 00:28:34.940 16:05:38 -- common/autotest_common.sh@950 -- # wait 63319 00:28:36.844 ************************************ 00:28:36.844 END TEST non_locking_app_on_locked_coremask 00:28:36.844 ************************************ 00:28:36.844 00:28:36.844 real 0m12.181s 00:28:36.844 user 0m12.833s 00:28:36.844 sys 0m1.842s 00:28:36.844 16:05:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.844 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:36.844 16:05:40 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:28:36.844 16:05:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:36.844 16:05:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:36.844 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:36.844 ************************************ 00:28:36.844 START TEST locking_app_on_unlocked_coremask 00:28:36.844 ************************************ 00:28:36.844 16:05:40 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:28:36.844 16:05:40 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=63469 00:28:36.844 16:05:40 -- event/cpu_locks.sh@99 -- # waitforlisten 63469 /var/tmp/spdk.sock 00:28:36.844 16:05:40 -- common/autotest_common.sh@819 -- # '[' -z 63469 ']' 00:28:36.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.844 16:05:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.844 16:05:40 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:28:36.844 16:05:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:36.844 16:05:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.844 16:05:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:36.844 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:36.844 [2024-07-22 16:05:40.970851] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:36.845 [2024-07-22 16:05:40.971306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63469 ] 00:28:37.102 [2024-07-22 16:05:41.137803] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:28:37.102 [2024-07-22 16:05:41.137876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.360 [2024-07-22 16:05:41.424352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:37.360 [2024-07-22 16:05:41.424640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.732 16:05:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:38.732 16:05:42 -- common/autotest_common.sh@852 -- # return 0 00:28:38.732 16:05:42 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63493 00:28:38.732 16:05:42 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:38.732 16:05:42 -- event/cpu_locks.sh@103 -- # waitforlisten 63493 /var/tmp/spdk2.sock 00:28:38.732 16:05:42 -- common/autotest_common.sh@819 -- # '[' -z 63493 ']' 00:28:38.732 16:05:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:38.732 16:05:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:38.732 16:05:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:38.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:38.732 16:05:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:38.732 16:05:42 -- common/autotest_common.sh@10 -- # set +x 00:28:38.732 [2024-07-22 16:05:42.759723] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:38.732 [2024-07-22 16:05:42.760154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63493 ] 00:28:38.732 [2024-07-22 16:05:42.936674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.299 [2024-07-22 16:05:43.460001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:39.299 [2024-07-22 16:05:43.460274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.199 16:05:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:41.199 16:05:45 -- common/autotest_common.sh@852 -- # return 0 00:28:41.199 16:05:45 -- event/cpu_locks.sh@105 -- # locks_exist 63493 00:28:41.199 16:05:45 -- event/cpu_locks.sh@22 -- # lslocks -p 63493 00:28:41.199 16:05:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:42.134 16:05:46 -- event/cpu_locks.sh@107 -- # killprocess 63469 00:28:42.134 16:05:46 -- common/autotest_common.sh@926 -- # '[' -z 63469 ']' 00:28:42.134 16:05:46 -- common/autotest_common.sh@930 -- # kill -0 63469 00:28:42.134 16:05:46 -- common/autotest_common.sh@931 -- # uname 00:28:42.134 16:05:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:42.134 16:05:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63469 00:28:42.134 killing process with pid 63469 00:28:42.134 16:05:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:42.134 16:05:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:42.134 16:05:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63469' 00:28:42.134 16:05:46 -- common/autotest_common.sh@945 -- # kill 63469 00:28:42.134 16:05:46 -- common/autotest_common.sh@950 -- # wait 63469 00:28:47.451 16:05:51 -- event/cpu_locks.sh@108 -- # killprocess 63493 00:28:47.451 16:05:51 -- common/autotest_common.sh@926 -- # '[' -z 63493 ']' 00:28:47.451 16:05:51 -- common/autotest_common.sh@930 -- # kill -0 63493 00:28:47.451 16:05:51 -- common/autotest_common.sh@931 -- # uname 00:28:47.451 16:05:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:47.451 16:05:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63493 00:28:47.451 killing process with pid 63493 00:28:47.451 16:05:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:47.451 16:05:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:47.451 16:05:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63493' 00:28:47.451 16:05:51 -- common/autotest_common.sh@945 -- # kill 63493 00:28:47.451 16:05:51 -- common/autotest_common.sh@950 -- # wait 63493 00:28:49.350 00:28:49.350 real 0m12.691s 00:28:49.350 user 0m13.278s 00:28:49.350 sys 0m1.947s 00:28:49.350 16:05:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.350 ************************************ 00:28:49.350 END TEST locking_app_on_unlocked_coremask 00:28:49.350 ************************************ 00:28:49.350 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:28:49.608 16:05:53 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:28:49.608 16:05:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:49.608 16:05:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.608 16:05:53 -- common/autotest_common.sh@10 -- # set 
+x 00:28:49.608 ************************************ 00:28:49.608 START TEST locking_app_on_locked_coremask 00:28:49.608 ************************************ 00:28:49.608 16:05:53 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:28:49.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.608 16:05:53 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63649 00:28:49.608 16:05:53 -- event/cpu_locks.sh@116 -- # waitforlisten 63649 /var/tmp/spdk.sock 00:28:49.608 16:05:53 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:49.608 16:05:53 -- common/autotest_common.sh@819 -- # '[' -z 63649 ']' 00:28:49.608 16:05:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.608 16:05:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:49.608 16:05:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.608 16:05:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:49.608 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:28:49.608 [2024-07-22 16:05:53.730710] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:49.608 [2024-07-22 16:05:53.730885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63649 ] 00:28:49.866 [2024-07-22 16:05:53.901056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.123 [2024-07-22 16:05:54.184566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:50.123 [2024-07-22 16:05:54.184830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.066 16:05:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.066 16:05:55 -- common/autotest_common.sh@852 -- # return 0 00:28:51.066 16:05:55 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63673 00:28:51.066 16:05:55 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63673 /var/tmp/spdk2.sock 00:28:51.066 16:05:55 -- common/autotest_common.sh@640 -- # local es=0 00:28:51.066 16:05:55 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 63673 /var/tmp/spdk2.sock 00:28:51.066 16:05:55 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:28:51.066 16:05:55 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:51.066 16:05:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.066 16:05:55 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:28:51.066 16:05:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:51.066 16:05:55 -- common/autotest_common.sh@643 -- # waitforlisten 63673 /var/tmp/spdk2.sock 00:28:51.066 16:05:55 -- common/autotest_common.sh@819 -- # '[' -z 63673 ']' 00:28:51.066 16:05:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:51.066 16:05:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:51.066 16:05:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:51.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:28:51.066 16:05:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:51.066 16:05:55 -- common/autotest_common.sh@10 -- # set +x 00:28:51.324 [2024-07-22 16:05:55.392641] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:51.324 [2024-07-22 16:05:55.393026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63673 ] 00:28:51.324 [2024-07-22 16:05:55.566479] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63649 has claimed it. 00:28:51.324 [2024-07-22 16:05:55.566597] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:51.890 ERROR: process (pid: 63673) is no longer running 00:28:51.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (63673) - No such process 00:28:51.890 16:05:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.890 16:05:56 -- common/autotest_common.sh@852 -- # return 1 00:28:51.890 16:05:56 -- common/autotest_common.sh@643 -- # es=1 00:28:51.890 16:05:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:51.890 16:05:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:51.890 16:05:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:51.890 16:05:56 -- event/cpu_locks.sh@122 -- # locks_exist 63649 00:28:51.890 16:05:56 -- event/cpu_locks.sh@22 -- # lslocks -p 63649 00:28:51.890 16:05:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:52.457 16:05:56 -- event/cpu_locks.sh@124 -- # killprocess 63649 00:28:52.457 16:05:56 -- common/autotest_common.sh@926 -- # '[' -z 63649 ']' 00:28:52.457 16:05:56 -- common/autotest_common.sh@930 -- # kill -0 63649 00:28:52.457 16:05:56 -- common/autotest_common.sh@931 -- # uname 00:28:52.457 16:05:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:52.457 16:05:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63649 00:28:52.457 killing process with pid 63649 00:28:52.457 16:05:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:52.457 16:05:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:52.457 16:05:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63649' 00:28:52.457 16:05:56 -- common/autotest_common.sh@945 -- # kill 63649 00:28:52.457 16:05:56 -- common/autotest_common.sh@950 -- # wait 63649 00:28:54.988 ************************************ 00:28:54.988 END TEST locking_app_on_locked_coremask 00:28:54.988 ************************************ 00:28:54.988 00:28:54.988 real 0m5.226s 00:28:54.988 user 0m5.465s 00:28:54.988 sys 0m1.013s 00:28:54.988 16:05:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.988 16:05:58 -- common/autotest_common.sh@10 -- # set +x 00:28:54.988 16:05:58 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:28:54.988 16:05:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:54.988 16:05:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:54.988 16:05:58 -- common/autotest_common.sh@10 -- # set +x 00:28:54.988 ************************************ 00:28:54.988 START TEST locking_overlapped_coremask 00:28:54.988 ************************************ 00:28:54.988 16:05:58 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:28:54.988 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.988 16:05:58 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63742 00:28:54.988 16:05:58 -- event/cpu_locks.sh@133 -- # waitforlisten 63742 /var/tmp/spdk.sock 00:28:54.988 16:05:58 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:54.988 16:05:58 -- common/autotest_common.sh@819 -- # '[' -z 63742 ']' 00:28:54.988 16:05:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.988 16:05:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:54.988 16:05:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.988 16:05:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:54.988 16:05:58 -- common/autotest_common.sh@10 -- # set +x 00:28:54.988 [2024-07-22 16:05:59.026555] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:54.988 [2024-07-22 16:05:59.026783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63742 ] 00:28:54.988 [2024-07-22 16:05:59.207968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:55.246 [2024-07-22 16:05:59.454605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:55.246 [2024-07-22 16:05:59.455041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.246 [2024-07-22 16:05:59.455885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.246 [2024-07-22 16:05:59.455907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.677 16:06:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:56.677 16:06:00 -- common/autotest_common.sh@852 -- # return 0 00:28:56.677 16:06:00 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63768 00:28:56.677 16:06:00 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:28:56.677 16:06:00 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63768 /var/tmp/spdk2.sock 00:28:56.677 16:06:00 -- common/autotest_common.sh@640 -- # local es=0 00:28:56.677 16:06:00 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 63768 /var/tmp/spdk2.sock 00:28:56.677 16:06:00 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:28:56.677 16:06:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:56.677 16:06:00 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:28:56.677 16:06:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:56.677 16:06:00 -- common/autotest_common.sh@643 -- # waitforlisten 63768 /var/tmp/spdk2.sock 00:28:56.677 16:06:00 -- common/autotest_common.sh@819 -- # '[' -z 63768 ']' 00:28:56.677 16:06:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:56.677 16:06:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:56.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:56.677 16:06:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
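The two targets in this test are launched with -m 0x7 and -m 0x1c, masks that overlap on core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4), so the lock-claim failure reported further down is the expected outcome. For readers decoding these masks, a plain-bash expansion (nothing SPDK-specific) looks like this:
    for i in $(seq 0 7); do (( (0x7  >> i) & 1 )) && echo "0x7  -> core $i"; done   # cores 0 1 2
    for i in $(seq 0 7); do (( (0x1c >> i) & 1 )) && echo "0x1c -> core $i"; done   # cores 2 3 4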
00:28:56.677 16:06:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:56.677 16:06:00 -- common/autotest_common.sh@10 -- # set +x 00:28:56.677 [2024-07-22 16:06:00.821035] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:56.677 [2024-07-22 16:06:00.821625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63768 ] 00:28:56.936 [2024-07-22 16:06:01.005855] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63742 has claimed it. 00:28:56.936 [2024-07-22 16:06:01.010063] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:57.502 ERROR: process (pid: 63768) is no longer running 00:28:57.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (63768) - No such process 00:28:57.502 16:06:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:57.502 16:06:01 -- common/autotest_common.sh@852 -- # return 1 00:28:57.502 16:06:01 -- common/autotest_common.sh@643 -- # es=1 00:28:57.502 16:06:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:57.502 16:06:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:57.502 16:06:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:57.502 16:06:01 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:28:57.502 16:06:01 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:57.502 16:06:01 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:57.502 16:06:01 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:57.502 16:06:01 -- event/cpu_locks.sh@141 -- # killprocess 63742 00:28:57.502 16:06:01 -- common/autotest_common.sh@926 -- # '[' -z 63742 ']' 00:28:57.503 16:06:01 -- common/autotest_common.sh@930 -- # kill -0 63742 00:28:57.503 16:06:01 -- common/autotest_common.sh@931 -- # uname 00:28:57.503 16:06:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:57.503 16:06:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63742 00:28:57.503 16:06:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:57.503 16:06:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:57.503 16:06:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63742' 00:28:57.503 killing process with pid 63742 00:28:57.503 16:06:01 -- common/autotest_common.sh@945 -- # kill 63742 00:28:57.503 16:06:01 -- common/autotest_common.sh@950 -- # wait 63742 00:29:00.036 00:29:00.036 real 0m4.964s 00:29:00.036 user 0m13.271s 00:29:00.036 sys 0m0.820s 00:29:00.036 16:06:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.036 16:06:03 -- common/autotest_common.sh@10 -- # set +x 00:29:00.036 ************************************ 00:29:00.036 END TEST locking_overlapped_coremask 00:29:00.036 ************************************ 00:29:00.037 16:06:03 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:29:00.037 16:06:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.037 16:06:03 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.037 16:06:03 -- common/autotest_common.sh@10 -- # set +x 00:29:00.037 ************************************ 00:29:00.037 START TEST locking_overlapped_coremask_via_rpc 00:29:00.037 ************************************ 00:29:00.037 16:06:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:29:00.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.037 16:06:03 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63832 00:29:00.037 16:06:03 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:29:00.037 16:06:03 -- event/cpu_locks.sh@149 -- # waitforlisten 63832 /var/tmp/spdk.sock 00:29:00.037 16:06:03 -- common/autotest_common.sh@819 -- # '[' -z 63832 ']' 00:29:00.037 16:06:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.037 16:06:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:00.037 16:06:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.037 16:06:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:00.037 16:06:03 -- common/autotest_common.sh@10 -- # set +x 00:29:00.037 [2024-07-22 16:06:04.026694] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:00.037 [2024-07-22 16:06:04.027157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63832 ] 00:29:00.037 [2024-07-22 16:06:04.190078] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:29:00.037 [2024-07-22 16:06:04.190419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.323 [2024-07-22 16:06:04.441356] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:00.323 [2024-07-22 16:06:04.442045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.323 [2024-07-22 16:06:04.442623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.323 [2024-07-22 16:06:04.442663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.698 16:06:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:01.698 16:06:05 -- common/autotest_common.sh@852 -- # return 0 00:29:01.698 16:06:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63863 00:29:01.698 16:06:05 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:29:01.698 16:06:05 -- event/cpu_locks.sh@153 -- # waitforlisten 63863 /var/tmp/spdk2.sock 00:29:01.698 16:06:05 -- common/autotest_common.sh@819 -- # '[' -z 63863 ']' 00:29:01.698 16:06:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:01.698 16:06:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:01.698 16:06:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:01.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:29:01.698 16:06:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:01.698 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:29:01.698 [2024-07-22 16:06:05.754818] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:01.698 [2024-07-22 16:06:05.755821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63863 ] 00:29:01.698 [2024-07-22 16:06:05.929353] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:29:01.698 [2024-07-22 16:06:05.929415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:02.265 [2024-07-22 16:06:06.453466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:02.265 [2024-07-22 16:06:06.453846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:02.265 [2024-07-22 16:06:06.457164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.265 [2024-07-22 16:06:06.457187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.169 16:06:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:04.169 16:06:08 -- common/autotest_common.sh@852 -- # return 0 00:29:04.169 16:06:08 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:29:04.169 16:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.169 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.169 16:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.169 16:06:08 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:04.169 16:06:08 -- common/autotest_common.sh@640 -- # local es=0 00:29:04.169 16:06:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:04.169 16:06:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:04.169 16:06:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.169 16:06:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:04.169 16:06:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.169 16:06:08 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:04.169 16:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.169 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.169 [2024-07-22 16:06:08.227296] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63832 has claimed it. 00:29:04.169 request: 00:29:04.169 { 00:29:04.169 "method": "framework_enable_cpumask_locks", 00:29:04.169 "req_id": 1 00:29:04.169 } 00:29:04.169 Got JSON-RPC error response 00:29:04.169 response: 00:29:04.169 { 00:29:04.169 "code": -32603, 00:29:04.169 "message": "Failed to claim CPU core: 2" 00:29:04.169 } 00:29:04.169 16:06:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:04.169 16:06:08 -- common/autotest_common.sh@643 -- # es=1 00:29:04.169 16:06:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:04.169 16:06:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:04.169 16:06:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:04.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
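What failed above is the deferred lock claim, not process startup: both targets in this test start with --disable-cpumask-locks, so the overlapping masks coexist until framework_enable_cpumask_locks is invoked, at which point the first target (pid 63832) claims cores 0-2 and the second target's claim of core 2 is rejected with JSON-RPC error -32603. A minimal by-hand sketch of the same sequence, assuming the standard SPDK tree layout (rpc_cmd in these tests wraps scripts/rpc.py):
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks                          # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed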
00:29:04.169 16:06:08 -- event/cpu_locks.sh@158 -- # waitforlisten 63832 /var/tmp/spdk.sock 00:29:04.169 16:06:08 -- common/autotest_common.sh@819 -- # '[' -z 63832 ']' 00:29:04.169 16:06:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.169 16:06:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:04.169 16:06:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.169 16:06:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:04.169 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.428 16:06:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:04.428 16:06:08 -- common/autotest_common.sh@852 -- # return 0 00:29:04.428 16:06:08 -- event/cpu_locks.sh@159 -- # waitforlisten 63863 /var/tmp/spdk2.sock 00:29:04.428 16:06:08 -- common/autotest_common.sh@819 -- # '[' -z 63863 ']' 00:29:04.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:04.428 16:06:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:04.428 16:06:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:04.428 16:06:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:04.428 16:06:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:04.428 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.687 ************************************ 00:29:04.687 END TEST locking_overlapped_coremask_via_rpc 00:29:04.687 ************************************ 00:29:04.687 16:06:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:04.687 16:06:08 -- common/autotest_common.sh@852 -- # return 0 00:29:04.687 16:06:08 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:29:04.687 16:06:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:29:04.687 16:06:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:29:04.687 16:06:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:29:04.687 00:29:04.687 real 0m4.811s 00:29:04.687 user 0m1.936s 00:29:04.687 sys 0m0.325s 00:29:04.687 16:06:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.687 16:06:08 -- common/autotest_common.sh@10 -- # set +x 00:29:04.687 16:06:08 -- event/cpu_locks.sh@174 -- # cleanup 00:29:04.687 16:06:08 -- event/cpu_locks.sh@15 -- # [[ -z 63832 ]] 00:29:04.687 16:06:08 -- event/cpu_locks.sh@15 -- # killprocess 63832 00:29:04.687 16:06:08 -- common/autotest_common.sh@926 -- # '[' -z 63832 ']' 00:29:04.687 16:06:08 -- common/autotest_common.sh@930 -- # kill -0 63832 00:29:04.687 16:06:08 -- common/autotest_common.sh@931 -- # uname 00:29:04.687 16:06:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:04.687 16:06:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63832 00:29:04.687 killing process with pid 63832 00:29:04.687 16:06:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:04.687 16:06:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:04.687 16:06:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63832' 00:29:04.687 16:06:08 -- 
common/autotest_common.sh@945 -- # kill 63832 00:29:04.687 16:06:08 -- common/autotest_common.sh@950 -- # wait 63832 00:29:07.227 16:06:11 -- event/cpu_locks.sh@16 -- # [[ -z 63863 ]] 00:29:07.227 16:06:11 -- event/cpu_locks.sh@16 -- # killprocess 63863 00:29:07.227 16:06:11 -- common/autotest_common.sh@926 -- # '[' -z 63863 ']' 00:29:07.227 16:06:11 -- common/autotest_common.sh@930 -- # kill -0 63863 00:29:07.227 16:06:11 -- common/autotest_common.sh@931 -- # uname 00:29:07.227 16:06:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:07.227 16:06:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 63863 00:29:07.227 killing process with pid 63863 00:29:07.227 16:06:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:29:07.227 16:06:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:29:07.227 16:06:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 63863' 00:29:07.227 16:06:11 -- common/autotest_common.sh@945 -- # kill 63863 00:29:07.227 16:06:11 -- common/autotest_common.sh@950 -- # wait 63863 00:29:09.755 16:06:13 -- event/cpu_locks.sh@18 -- # rm -f 00:29:09.755 16:06:13 -- event/cpu_locks.sh@1 -- # cleanup 00:29:09.755 16:06:13 -- event/cpu_locks.sh@15 -- # [[ -z 63832 ]] 00:29:09.755 16:06:13 -- event/cpu_locks.sh@15 -- # killprocess 63832 00:29:09.755 16:06:13 -- common/autotest_common.sh@926 -- # '[' -z 63832 ']' 00:29:09.755 16:06:13 -- common/autotest_common.sh@930 -- # kill -0 63832 00:29:09.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (63832) - No such process 00:29:09.755 Process with pid 63832 is not found 00:29:09.755 16:06:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 63832 is not found' 00:29:09.755 16:06:13 -- event/cpu_locks.sh@16 -- # [[ -z 63863 ]] 00:29:09.755 16:06:13 -- event/cpu_locks.sh@16 -- # killprocess 63863 00:29:09.755 Process with pid 63863 is not found 00:29:09.756 16:06:13 -- common/autotest_common.sh@926 -- # '[' -z 63863 ']' 00:29:09.756 16:06:13 -- common/autotest_common.sh@930 -- # kill -0 63863 00:29:09.756 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (63863) - No such process 00:29:09.756 16:06:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 63863 is not found' 00:29:09.756 16:06:13 -- event/cpu_locks.sh@18 -- # rm -f 00:29:09.756 ************************************ 00:29:09.756 END TEST cpu_locks 00:29:09.756 ************************************ 00:29:09.756 00:29:09.756 real 0m54.624s 00:29:09.756 user 1m31.773s 00:29:09.756 sys 0m9.094s 00:29:09.756 16:06:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.756 16:06:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.756 ************************************ 00:29:09.756 END TEST event 00:29:09.756 ************************************ 00:29:09.756 00:29:09.756 real 1m26.923s 00:29:09.756 user 2m31.893s 00:29:09.756 sys 0m13.810s 00:29:09.756 16:06:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.756 16:06:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.756 16:06:13 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:29:09.756 16:06:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:09.756 16:06:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:09.756 16:06:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.756 ************************************ 00:29:09.756 START TEST thread 00:29:09.756 
************************************ 00:29:09.756 16:06:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:29:09.756 * Looking for test storage... 00:29:09.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:29:09.756 16:06:13 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:29:09.756 16:06:13 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:29:09.756 16:06:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:09.756 16:06:13 -- common/autotest_common.sh@10 -- # set +x 00:29:09.756 ************************************ 00:29:09.756 START TEST thread_poller_perf 00:29:09.756 ************************************ 00:29:09.756 16:06:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:29:09.756 [2024-07-22 16:06:13.940063] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:09.756 [2024-07-22 16:06:13.940218] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64047 ] 00:29:10.014 [2024-07-22 16:06:14.106686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.271 [2024-07-22 16:06:14.415552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.271 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:29:11.646 ====================================== 00:29:11.646 busy:2216589818 (cyc) 00:29:11.646 total_run_count: 310000 00:29:11.646 tsc_hz: 2200000000 (cyc) 00:29:11.646 ====================================== 00:29:11.646 poller_cost: 7150 (cyc), 3250 (nsec) 00:29:11.646 00:29:11.646 real 0m1.945s 00:29:11.646 user 0m1.714s 00:29:11.646 sys 0m0.130s 00:29:11.646 16:06:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.646 16:06:15 -- common/autotest_common.sh@10 -- # set +x 00:29:11.646 ************************************ 00:29:11.646 END TEST thread_poller_perf 00:29:11.646 ************************************ 00:29:11.646 16:06:15 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:11.646 16:06:15 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:29:11.646 16:06:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.646 16:06:15 -- common/autotest_common.sh@10 -- # set +x 00:29:11.647 ************************************ 00:29:11.647 START TEST thread_poller_perf 00:29:11.647 ************************************ 00:29:11.647 16:06:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:11.905 [2024-07-22 16:06:15.951572] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
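The poller_cost line in the run above follows directly from the counters printed with it: busy cycles divided by total_run_count gives cycles per poller invocation, and scaling by tsc_hz converts that to nanoseconds. Redoing the arithmetic for this run (shell integer math, values copied from the output):
    echo $(( 2216589818 / 310000 ))                             # 7150 cycles per call
    echo $(( 2216589818 / 310000 * 1000000000 / 2200000000 ))   # 3250 ns at a 2.2 GHz TSC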
00:29:11.905 [2024-07-22 16:06:15.951750] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64089 ] 00:29:11.905 [2024-07-22 16:06:16.129780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.163 [2024-07-22 16:06:16.431125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.163 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:29:14.067 ====================================== 00:29:14.067 busy:2204721907 (cyc) 00:29:14.067 total_run_count: 3850000 00:29:14.067 tsc_hz: 2200000000 (cyc) 00:29:14.067 ====================================== 00:29:14.067 poller_cost: 572 (cyc), 260 (nsec) 00:29:14.067 ************************************ 00:29:14.067 END TEST thread_poller_perf 00:29:14.067 ************************************ 00:29:14.067 00:29:14.067 real 0m1.993s 00:29:14.067 user 0m1.757s 00:29:14.067 sys 0m0.135s 00:29:14.067 16:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.067 16:06:17 -- common/autotest_common.sh@10 -- # set +x 00:29:14.067 16:06:17 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:29:14.067 16:06:17 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:29:14.067 16:06:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:14.067 16:06:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.067 16:06:17 -- common/autotest_common.sh@10 -- # set +x 00:29:14.067 ************************************ 00:29:14.067 START TEST thread_spdk_lock 00:29:14.067 ************************************ 00:29:14.067 16:06:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:29:14.067 [2024-07-22 16:06:18.005467] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:14.067 [2024-07-22 16:06:18.005653] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64131 ] 00:29:14.067 [2024-07-22 16:06:18.178600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:14.340 [2024-07-22 16:06:18.495104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.340 [2024-07-22 16:06:18.495119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.906 [2024-07-22 16:06:19.053220] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:29:14.906 [2024-07-22 16:06:19.053358] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:29:14.906 [2024-07-22 16:06:19.053402] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x5a2abec78500 00:29:14.906 [2024-07-22 16:06:19.062320] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:29:14.906 [2024-07-22 16:06:19.062420] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:29:14.906 [2024-07-22 16:06:19.062478] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:29:15.473 Starting test contend 00:29:15.473 Worker Delay Wait us Hold us Total us 00:29:15.473 0 3 104096 202295 306392 00:29:15.473 1 5 43684 303992 347676 00:29:15.473 PASS test contend 00:29:15.473 Starting test hold_by_poller 00:29:15.473 PASS test hold_by_poller 00:29:15.473 Starting test hold_by_message 00:29:15.473 PASS test hold_by_message 00:29:15.473 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:29:15.473 100014 assertions passed 00:29:15.473 0 assertions failed 00:29:15.473 00:29:15.473 real 0m1.611s 00:29:15.473 user 0m1.940s 00:29:15.473 sys 0m0.140s 00:29:15.473 16:06:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.473 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:29:15.473 ************************************ 00:29:15.473 END TEST thread_spdk_lock 00:29:15.473 ************************************ 00:29:15.473 ************************************ 00:29:15.473 END TEST thread 00:29:15.473 ************************************ 00:29:15.473 00:29:15.473 real 0m5.808s 00:29:15.473 user 0m5.485s 00:29:15.473 sys 0m0.582s 00:29:15.473 16:06:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.473 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:29:15.473 16:06:19 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:29:15.473 16:06:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:15.473 16:06:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:15.473 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:29:15.473 ************************************ 00:29:15.473 START TEST accel 00:29:15.473 
************************************ 00:29:15.473 16:06:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:29:15.473 * Looking for test storage... 00:29:15.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:29:15.731 16:06:19 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:29:15.731 16:06:19 -- accel/accel.sh@74 -- # get_expected_opcs 00:29:15.731 16:06:19 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:29:15.731 16:06:19 -- accel/accel.sh@59 -- # spdk_tgt_pid=64214 00:29:15.731 16:06:19 -- accel/accel.sh@60 -- # waitforlisten 64214 00:29:15.731 16:06:19 -- common/autotest_common.sh@819 -- # '[' -z 64214 ']' 00:29:15.731 16:06:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.731 16:06:19 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:29:15.731 16:06:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:15.731 16:06:19 -- accel/accel.sh@58 -- # build_accel_config 00:29:15.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.731 16:06:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.731 16:06:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:15.731 16:06:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:15.731 16:06:19 -- common/autotest_common.sh@10 -- # set +x 00:29:15.731 16:06:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:15.731 16:06:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:15.731 16:06:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:15.731 16:06:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:15.731 16:06:19 -- accel/accel.sh@41 -- # local IFS=, 00:29:15.731 16:06:19 -- accel/accel.sh@42 -- # jq -r . 00:29:15.731 [2024-07-22 16:06:19.845932] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:15.731 [2024-07-22 16:06:19.846143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64214 ] 00:29:15.990 [2024-07-22 16:06:20.040304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.249 [2024-07-22 16:06:20.347789] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:16.249 [2024-07-22 16:06:20.348065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.632 16:06:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:17.632 16:06:21 -- common/autotest_common.sh@852 -- # return 0 00:29:17.632 16:06:21 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:29:17.632 16:06:21 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:29:17.632 16:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:17.632 16:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:17.632 16:06:21 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:29:17.632 16:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # IFS== 00:29:17.633 16:06:21 -- accel/accel.sh@64 -- # read -r opc module 00:29:17.633 16:06:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:29:17.633 16:06:21 -- accel/accel.sh@67 -- # killprocess 64214 00:29:17.633 16:06:21 -- common/autotest_common.sh@926 -- # '[' -z 64214 ']' 00:29:17.633 16:06:21 -- common/autotest_common.sh@930 -- # kill -0 64214 00:29:17.633 16:06:21 -- common/autotest_common.sh@931 -- # uname 00:29:17.633 16:06:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:17.633 16:06:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 64214 00:29:17.633 16:06:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:17.633 16:06:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:17.633 16:06:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 64214' 00:29:17.633 killing process with pid 64214 00:29:17.633 16:06:21 -- common/autotest_common.sh@945 -- # kill 64214 00:29:17.633 16:06:21 -- common/autotest_common.sh@950 -- # wait 64214 00:29:20.160 16:06:24 -- accel/accel.sh@68 -- # trap - ERR 00:29:20.160 16:06:24 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:29:20.160 16:06:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:20.160 16:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:20.160 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.160 16:06:24 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:29:20.160 16:06:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:29:20.160 16:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:29:20.160 16:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:20.160 16:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:20.160 16:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:20.160 16:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:20.160 16:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:20.160 16:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:29:20.160 16:06:24 -- accel/accel.sh@42 -- # jq -r . 
00:29:20.160 16:06:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.160 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.160 16:06:24 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:29:20.160 16:06:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:20.160 16:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:20.160 16:06:24 -- common/autotest_common.sh@10 -- # set +x 00:29:20.417 ************************************ 00:29:20.417 START TEST accel_missing_filename 00:29:20.417 ************************************ 00:29:20.417 16:06:24 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:29:20.417 16:06:24 -- common/autotest_common.sh@640 -- # local es=0 00:29:20.417 16:06:24 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:29:20.417 16:06:24 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:29:20.417 16:06:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:20.417 16:06:24 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:29:20.417 16:06:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:20.417 16:06:24 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:29:20.417 16:06:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:29:20.417 16:06:24 -- accel/accel.sh@12 -- # build_accel_config 00:29:20.417 16:06:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:20.417 16:06:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:20.417 16:06:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:20.417 16:06:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:20.417 16:06:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:20.417 16:06:24 -- accel/accel.sh@41 -- # local IFS=, 00:29:20.417 16:06:24 -- accel/accel.sh@42 -- # jq -r . 00:29:20.417 [2024-07-22 16:06:24.480101] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:20.417 [2024-07-22 16:06:24.480287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64297 ] 00:29:20.417 [2024-07-22 16:06:24.651585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.673 [2024-07-22 16:06:24.916252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.930 [2024-07-22 16:06:25.146455] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:21.495 [2024-07-22 16:06:25.706873] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:29:22.060 A filename is required. 
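The abort above is the expected result of a deliberately incomplete command line: per the option listing printed later in this log, compress/decompress workloads take the uncompressed input via -l, which this invocation omits, so accel_perf exits before starting and the NOT wrapper then asserts a non-zero status (the es checks that follow). For contrast, the compress_verify test just below supplies -l with the same input file the suite uses:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
That run still fails, but for a different, equally intentional reason: it also passes -y, and compression does not support the verify option.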
00:29:22.060 16:06:26 -- common/autotest_common.sh@643 -- # es=234 00:29:22.060 16:06:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:22.060 16:06:26 -- common/autotest_common.sh@652 -- # es=106 00:29:22.060 16:06:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:22.060 16:06:26 -- common/autotest_common.sh@660 -- # es=1 00:29:22.060 16:06:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:22.060 00:29:22.060 real 0m1.727s 00:29:22.060 user 0m1.395s 00:29:22.060 sys 0m0.240s 00:29:22.060 16:06:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.060 16:06:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.060 ************************************ 00:29:22.060 END TEST accel_missing_filename 00:29:22.060 ************************************ 00:29:22.060 16:06:26 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:29:22.060 16:06:26 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:29:22.060 16:06:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:22.060 16:06:26 -- common/autotest_common.sh@10 -- # set +x 00:29:22.060 ************************************ 00:29:22.060 START TEST accel_compress_verify 00:29:22.060 ************************************ 00:29:22.060 16:06:26 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:29:22.060 16:06:26 -- common/autotest_common.sh@640 -- # local es=0 00:29:22.060 16:06:26 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:29:22.060 16:06:26 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:29:22.060 16:06:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.060 16:06:26 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:29:22.060 16:06:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:22.060 16:06:26 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:29:22.060 16:06:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:29:22.060 16:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:29:22.060 16:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:22.060 16:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:22.060 16:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:22.060 16:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:22.060 16:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:22.060 16:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:29:22.060 16:06:26 -- accel/accel.sh@42 -- # jq -r . 00:29:22.060 [2024-07-22 16:06:26.257203] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:22.060 [2024-07-22 16:06:26.257374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64334 ] 00:29:22.323 [2024-07-22 16:06:26.425507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.598 [2024-07-22 16:06:26.717486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.856 [2024-07-22 16:06:26.944630] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:23.422 [2024-07-22 16:06:27.501190] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:29:23.681 00:29:23.681 Compression does not support the verify option, aborting. 00:29:23.681 16:06:27 -- common/autotest_common.sh@643 -- # es=161 00:29:23.681 16:06:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:23.681 16:06:27 -- common/autotest_common.sh@652 -- # es=33 00:29:23.681 16:06:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:23.681 16:06:27 -- common/autotest_common.sh@660 -- # es=1 00:29:23.681 16:06:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:23.681 00:29:23.681 real 0m1.732s 00:29:23.681 user 0m1.384s 00:29:23.681 sys 0m0.254s 00:29:23.681 16:06:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.681 16:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:23.681 ************************************ 00:29:23.681 END TEST accel_compress_verify 00:29:23.681 ************************************ 00:29:23.939 16:06:27 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:29:23.939 16:06:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:23.939 16:06:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:23.939 16:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:23.939 ************************************ 00:29:23.939 START TEST accel_wrong_workload 00:29:23.939 ************************************ 00:29:23.939 16:06:27 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:29:23.939 16:06:27 -- common/autotest_common.sh@640 -- # local es=0 00:29:23.939 16:06:27 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:29:23.939 16:06:27 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:29:23.939 16:06:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:23.939 16:06:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:29:23.939 16:06:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:23.939 16:06:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:29:23.939 16:06:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:29:23.939 16:06:28 -- accel/accel.sh@12 -- # build_accel_config 00:29:23.939 16:06:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:23.939 16:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:23.939 16:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:23.939 16:06:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:23.939 16:06:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:23.939 16:06:28 -- accel/accel.sh@41 -- # local IFS=, 00:29:23.939 16:06:28 -- accel/accel.sh@42 -- # jq -r . 
00:29:23.939 Unsupported workload type: foobar 00:29:23.939 [2024-07-22 16:06:28.037155] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:29:23.939 accel_perf options: 00:29:23.939 [-h help message] 00:29:23.939 [-q queue depth per core] 00:29:23.939 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:29:23.939 [-T number of threads per core 00:29:23.939 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:29:23.939 [-t time in seconds] 00:29:23.939 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:29:23.939 [ dif_verify, , dif_generate, dif_generate_copy 00:29:23.939 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:29:23.939 [-l for compress/decompress workloads, name of uncompressed input file 00:29:23.939 [-S for crc32c workload, use this seed value (default 0) 00:29:23.939 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:29:23.939 [-f for fill workload, use this BYTE value (default 255) 00:29:23.939 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:29:23.939 [-y verify result if this switch is on] 00:29:23.939 [-a tasks to allocate per core (default: same value as -q)] 00:29:23.939 Can be used to spread operations across a wider range of memory. 00:29:23.939 16:06:28 -- common/autotest_common.sh@643 -- # es=1 00:29:23.939 16:06:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:23.939 16:06:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:23.939 16:06:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:23.939 00:29:23.939 real 0m0.065s 00:29:23.939 user 0m0.042s 00:29:23.940 sys 0m0.033s 00:29:23.940 16:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.940 ************************************ 00:29:23.940 END TEST accel_wrong_workload 00:29:23.940 16:06:28 -- common/autotest_common.sh@10 -- # set +x 00:29:23.940 ************************************ 00:29:23.940 16:06:28 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:29:23.940 16:06:28 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:29:23.940 16:06:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:23.940 16:06:28 -- common/autotest_common.sh@10 -- # set +x 00:29:23.940 ************************************ 00:29:23.940 START TEST accel_negative_buffers 00:29:23.940 ************************************ 00:29:23.940 16:06:28 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:29:23.940 16:06:28 -- common/autotest_common.sh@640 -- # local es=0 00:29:23.940 16:06:28 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:29:23.940 16:06:28 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:29:23.940 16:06:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:23.940 16:06:28 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:29:23.940 16:06:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:23.940 16:06:28 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:29:23.940 16:06:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:29:23.940 16:06:28 -- accel/accel.sh@12 -- # 
build_accel_config 00:29:23.940 16:06:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:23.940 16:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:23.940 16:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:23.940 16:06:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:23.940 16:06:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:23.940 16:06:28 -- accel/accel.sh@41 -- # local IFS=, 00:29:23.940 16:06:28 -- accel/accel.sh@42 -- # jq -r . 00:29:23.940 -x option must be non-negative. 00:29:23.940 [2024-07-22 16:06:28.163617] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:29:23.940 accel_perf options: 00:29:23.940 [-h help message] 00:29:23.940 [-q queue depth per core] 00:29:23.940 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:29:23.940 [-T number of threads per core 00:29:23.940 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:29:23.940 [-t time in seconds] 00:29:23.940 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:29:23.940 [ dif_verify, , dif_generate, dif_generate_copy 00:29:23.940 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:29:23.940 [-l for compress/decompress workloads, name of uncompressed input file 00:29:23.940 [-S for crc32c workload, use this seed value (default 0) 00:29:23.940 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:29:23.940 [-f for fill workload, use this BYTE value (default 255) 00:29:23.940 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:29:23.940 [-y verify result if this switch is on] 00:29:23.940 [-a tasks to allocate per core (default: same value as -q)] 00:29:23.940 Can be used to spread operations across a wider range of memory. 
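Both option listings above are accel_perf's own usage text, emitted because the negative tests feed it an unknown workload (foobar) and a negative -x; in each case the wrapper only checks that argument parsing fails with a non-zero status. For comparison, a well-formed invocation is the one the crc32c test below runs, shown here with the queue depth and transfer size it reports made explicit (they are the defaults):
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 32 -o 4096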
00:29:23.940 16:06:28 -- common/autotest_common.sh@643 -- # es=1 00:29:23.940 16:06:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:23.940 16:06:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:23.940 16:06:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:23.940 00:29:23.940 real 0m0.074s 00:29:23.940 user 0m0.040s 00:29:23.940 sys 0m0.043s 00:29:23.940 16:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.940 16:06:28 -- common/autotest_common.sh@10 -- # set +x 00:29:23.940 ************************************ 00:29:23.940 END TEST accel_negative_buffers 00:29:23.940 ************************************ 00:29:24.198 16:06:28 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:29:24.198 16:06:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:29:24.198 16:06:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.198 16:06:28 -- common/autotest_common.sh@10 -- # set +x 00:29:24.198 ************************************ 00:29:24.198 START TEST accel_crc32c 00:29:24.198 ************************************ 00:29:24.199 16:06:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:29:24.199 16:06:28 -- accel/accel.sh@16 -- # local accel_opc 00:29:24.199 16:06:28 -- accel/accel.sh@17 -- # local accel_module 00:29:24.199 16:06:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:29:24.199 16:06:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:29:24.199 16:06:28 -- accel/accel.sh@12 -- # build_accel_config 00:29:24.199 16:06:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:24.199 16:06:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:24.199 16:06:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:24.199 16:06:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:24.199 16:06:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:24.199 16:06:28 -- accel/accel.sh@41 -- # local IFS=, 00:29:24.199 16:06:28 -- accel/accel.sh@42 -- # jq -r . 00:29:24.199 [2024-07-22 16:06:28.291235] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:24.199 [2024-07-22 16:06:28.291407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64412 ] 00:29:24.457 [2024-07-22 16:06:28.473749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.715 [2024-07-22 16:06:28.775694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.244 16:06:30 -- accel/accel.sh@18 -- # out=' 00:29:27.244 SPDK Configuration: 00:29:27.244 Core mask: 0x1 00:29:27.244 00:29:27.244 Accel Perf Configuration: 00:29:27.244 Workload Type: crc32c 00:29:27.244 CRC-32C seed: 32 00:29:27.244 Transfer size: 4096 bytes 00:29:27.244 Vector count 1 00:29:27.244 Module: software 00:29:27.244 Queue depth: 32 00:29:27.244 Allocate depth: 32 00:29:27.244 # threads/core: 1 00:29:27.244 Run time: 1 seconds 00:29:27.244 Verify: Yes 00:29:27.244 00:29:27.244 Running for 1 seconds... 
00:29:27.244 00:29:27.244 Core,Thread Transfers Bandwidth Failed Miscompares 00:29:27.244 ------------------------------------------------------------------------------------ 00:29:27.244 0,0 383104/s 1496 MiB/s 0 0 00:29:27.244 ==================================================================================== 00:29:27.244 Total 383104/s 1496 MiB/s 0 0' 00:29:27.244 16:06:30 -- accel/accel.sh@20 -- # IFS=: 00:29:27.244 16:06:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:29:27.244 16:06:30 -- accel/accel.sh@20 -- # read -r var val 00:29:27.244 16:06:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:29:27.244 16:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:29:27.244 16:06:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:27.244 16:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:27.244 16:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:27.244 16:06:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:27.244 16:06:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:27.244 16:06:30 -- accel/accel.sh@41 -- # local IFS=, 00:29:27.244 16:06:30 -- accel/accel.sh@42 -- # jq -r . 00:29:27.244 [2024-07-22 16:06:31.024160] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:27.244 [2024-07-22 16:06:31.024313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64448 ] 00:29:27.244 [2024-07-22 16:06:31.192855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.244 [2024-07-22 16:06:31.490546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val= 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val= 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=0x1 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val= 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val= 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=crc32c 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=32 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val= 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=software 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@23 -- # accel_module=software 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=32 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=32 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=1 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val=Yes 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val= 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:27.503 16:06:31 -- accel/accel.sh@21 -- # val= 00:29:27.503 16:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # IFS=: 00:29:27.503 16:06:31 -- accel/accel.sh@20 -- # read -r var val 00:29:30.033 16:06:33 -- accel/accel.sh@21 -- # val= 00:29:30.033 16:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # IFS=: 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # read -r var val 00:29:30.033 16:06:33 -- accel/accel.sh@21 -- # val= 00:29:30.033 16:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # IFS=: 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # read -r var val 00:29:30.033 16:06:33 -- accel/accel.sh@21 -- # val= 00:29:30.033 16:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # IFS=: 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # read -r var val 00:29:30.033 16:06:33 -- accel/accel.sh@21 -- # val= 00:29:30.033 16:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # IFS=: 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # read -r var val 00:29:30.033 16:06:33 -- accel/accel.sh@21 -- # val= 00:29:30.033 16:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # IFS=: 00:29:30.033 16:06:33 -- 
accel/accel.sh@20 -- # read -r var val 00:29:30.033 16:06:33 -- accel/accel.sh@21 -- # val= 00:29:30.033 16:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # IFS=: 00:29:30.033 16:06:33 -- accel/accel.sh@20 -- # read -r var val 00:29:30.033 16:06:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:29:30.033 16:06:33 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:29:30.033 16:06:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:30.033 00:29:30.033 real 0m5.475s 00:29:30.033 user 0m4.783s 00:29:30.033 sys 0m0.508s 00:29:30.033 16:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:30.033 ************************************ 00:29:30.033 16:06:33 -- common/autotest_common.sh@10 -- # set +x 00:29:30.033 END TEST accel_crc32c 00:29:30.033 ************************************ 00:29:30.033 16:06:33 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:29:30.033 16:06:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:29:30.033 16:06:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:30.033 16:06:33 -- common/autotest_common.sh@10 -- # set +x 00:29:30.033 ************************************ 00:29:30.033 START TEST accel_crc32c_C2 00:29:30.033 ************************************ 00:29:30.033 16:06:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:29:30.033 16:06:33 -- accel/accel.sh@16 -- # local accel_opc 00:29:30.033 16:06:33 -- accel/accel.sh@17 -- # local accel_module 00:29:30.033 16:06:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:29:30.033 16:06:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:29:30.033 16:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:29:30.033 16:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:30.033 16:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:30.033 16:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:30.033 16:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:30.033 16:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:30.033 16:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:29:30.033 16:06:33 -- accel/accel.sh@42 -- # jq -r . 00:29:30.033 [2024-07-22 16:06:33.828331] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:30.033 [2024-07-22 16:06:33.828557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64490 ] 00:29:30.033 [2024-07-22 16:06:34.019392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.291 [2024-07-22 16:06:34.318594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.819 16:06:36 -- accel/accel.sh@18 -- # out=' 00:29:32.819 SPDK Configuration: 00:29:32.819 Core mask: 0x1 00:29:32.819 00:29:32.819 Accel Perf Configuration: 00:29:32.819 Workload Type: crc32c 00:29:32.819 CRC-32C seed: 0 00:29:32.819 Transfer size: 4096 bytes 00:29:32.819 Vector count 2 00:29:32.819 Module: software 00:29:32.819 Queue depth: 32 00:29:32.819 Allocate depth: 32 00:29:32.819 # threads/core: 1 00:29:32.819 Run time: 1 seconds 00:29:32.819 Verify: Yes 00:29:32.819 00:29:32.819 Running for 1 seconds... 
00:29:32.819 00:29:32.819 Core,Thread Transfers Bandwidth Failed Miscompares 00:29:32.819 ------------------------------------------------------------------------------------ 00:29:32.819 0,0 306880/s 2397 MiB/s 0 0 00:29:32.819 ==================================================================================== 00:29:32.819 Total 306880/s 1198 MiB/s 0 0' 00:29:32.819 16:06:36 -- accel/accel.sh@20 -- # IFS=: 00:29:32.819 16:06:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:29:32.819 16:06:36 -- accel/accel.sh@20 -- # read -r var val 00:29:32.819 16:06:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:29:32.819 16:06:36 -- accel/accel.sh@12 -- # build_accel_config 00:29:32.819 16:06:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:32.819 16:06:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:32.819 16:06:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:32.819 16:06:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:32.819 16:06:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:32.819 16:06:36 -- accel/accel.sh@41 -- # local IFS=, 00:29:32.819 16:06:36 -- accel/accel.sh@42 -- # jq -r . 00:29:32.819 [2024-07-22 16:06:36.581761] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:32.819 [2024-07-22 16:06:36.582038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64527 ] 00:29:32.819 [2024-07-22 16:06:36.755260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.819 [2024-07-22 16:06:37.022216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val= 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val= 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=0x1 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val= 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val= 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=crc32c 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=0 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val= 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=software 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@23 -- # accel_module=software 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=32 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=32 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=1 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val=Yes 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val= 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:33.077 16:06:37 -- accel/accel.sh@21 -- # val= 00:29:33.077 16:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # IFS=: 00:29:33.077 16:06:37 -- accel/accel.sh@20 -- # read -r var val 00:29:34.974 16:06:39 -- accel/accel.sh@21 -- # val= 00:29:34.974 16:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:29:34.974 16:06:39 -- accel/accel.sh@20 -- # IFS=: 00:29:34.974 16:06:39 -- accel/accel.sh@20 -- # read -r var val 00:29:34.974 16:06:39 -- accel/accel.sh@21 -- # val= 00:29:34.974 16:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:29:34.974 16:06:39 -- accel/accel.sh@20 -- # IFS=: 00:29:34.974 16:06:39 -- accel/accel.sh@20 -- # read -r var val 00:29:35.231 16:06:39 -- accel/accel.sh@21 -- # val= 00:29:35.231 16:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:29:35.231 16:06:39 -- accel/accel.sh@20 -- # IFS=: 00:29:35.231 16:06:39 -- accel/accel.sh@20 -- # read -r var val 00:29:35.231 16:06:39 -- accel/accel.sh@21 -- # val= 00:29:35.231 16:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:29:35.231 16:06:39 -- accel/accel.sh@20 -- # IFS=: 00:29:35.231 16:06:39 -- accel/accel.sh@20 -- # read -r var val 00:29:35.231 16:06:39 -- accel/accel.sh@21 -- # val= 00:29:35.231 16:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:29:35.231 16:06:39 -- accel/accel.sh@20 -- # IFS=: 00:29:35.231 16:06:39 -- 
accel/accel.sh@20 -- # read -r var val 00:29:35.231 16:06:39 -- accel/accel.sh@21 -- # val= 00:29:35.231 16:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:29:35.231 16:06:39 -- accel/accel.sh@20 -- # IFS=: 00:29:35.231 16:06:39 -- accel/accel.sh@20 -- # read -r var val 00:29:35.231 16:06:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:29:35.231 16:06:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:29:35.231 16:06:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:35.231 00:29:35.231 real 0m5.487s 00:29:35.231 user 0m4.785s 00:29:35.231 sys 0m0.517s 00:29:35.231 ************************************ 00:29:35.231 END TEST accel_crc32c_C2 00:29:35.231 ************************************ 00:29:35.231 16:06:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.231 16:06:39 -- common/autotest_common.sh@10 -- # set +x 00:29:35.231 16:06:39 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:29:35.231 16:06:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:35.231 16:06:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:35.231 16:06:39 -- common/autotest_common.sh@10 -- # set +x 00:29:35.231 ************************************ 00:29:35.231 START TEST accel_copy 00:29:35.231 ************************************ 00:29:35.231 16:06:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:29:35.231 16:06:39 -- accel/accel.sh@16 -- # local accel_opc 00:29:35.231 16:06:39 -- accel/accel.sh@17 -- # local accel_module 00:29:35.231 16:06:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:29:35.231 16:06:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:29:35.231 16:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:29:35.231 16:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:35.231 16:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:35.231 16:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:35.231 16:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:35.231 16:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:35.231 16:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:29:35.231 16:06:39 -- accel/accel.sh@42 -- # jq -r . 00:29:35.231 [2024-07-22 16:06:39.346612] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:35.231 [2024-07-22 16:06:39.346796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64574 ] 00:29:35.489 [2024-07-22 16:06:39.520693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.748 [2024-07-22 16:06:39.792808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.750 16:06:42 -- accel/accel.sh@18 -- # out=' 00:29:37.750 SPDK Configuration: 00:29:37.750 Core mask: 0x1 00:29:37.750 00:29:37.750 Accel Perf Configuration: 00:29:37.750 Workload Type: copy 00:29:37.750 Transfer size: 4096 bytes 00:29:37.750 Vector count 1 00:29:37.750 Module: software 00:29:37.750 Queue depth: 32 00:29:37.750 Allocate depth: 32 00:29:37.750 # threads/core: 1 00:29:37.750 Run time: 1 seconds 00:29:37.750 Verify: Yes 00:29:37.750 00:29:37.750 Running for 1 seconds... 
00:29:37.750 00:29:37.750 Core,Thread Transfers Bandwidth Failed Miscompares 00:29:37.750 ------------------------------------------------------------------------------------ 00:29:37.750 0,0 233760/s 913 MiB/s 0 0 00:29:37.750 ==================================================================================== 00:29:37.750 Total 233760/s 913 MiB/s 0 0' 00:29:37.750 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:37.750 16:06:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:29:37.750 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:37.750 16:06:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:29:37.750 16:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:29:37.750 16:06:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:37.750 16:06:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:37.750 16:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:38.007 16:06:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:38.008 16:06:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:38.008 16:06:42 -- accel/accel.sh@41 -- # local IFS=, 00:29:38.008 16:06:42 -- accel/accel.sh@42 -- # jq -r . 00:29:38.008 [2024-07-22 16:06:42.064830] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:38.008 [2024-07-22 16:06:42.065060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64604 ] 00:29:38.008 [2024-07-22 16:06:42.246428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.266 [2024-07-22 16:06:42.525032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val= 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val= 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val=0x1 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val= 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val= 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val=copy 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@24 -- # accel_opc=copy 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- 
accel/accel.sh@21 -- # val= 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val=software 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@23 -- # accel_module=software 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val=32 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val=32 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val=1 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val=Yes 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val= 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:38.524 16:06:42 -- accel/accel.sh@21 -- # val= 00:29:38.524 16:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # IFS=: 00:29:38.524 16:06:42 -- accel/accel.sh@20 -- # read -r var val 00:29:41.064 16:06:44 -- accel/accel.sh@21 -- # val= 00:29:41.064 16:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # IFS=: 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # read -r var val 00:29:41.064 16:06:44 -- accel/accel.sh@21 -- # val= 00:29:41.064 16:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # IFS=: 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # read -r var val 00:29:41.064 16:06:44 -- accel/accel.sh@21 -- # val= 00:29:41.064 16:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # IFS=: 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # read -r var val 00:29:41.064 16:06:44 -- accel/accel.sh@21 -- # val= 00:29:41.064 16:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # IFS=: 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # read -r var val 00:29:41.064 16:06:44 -- accel/accel.sh@21 -- # val= 00:29:41.064 16:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # IFS=: 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # read -r var val 00:29:41.064 16:06:44 -- accel/accel.sh@21 -- # val= 00:29:41.064 16:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:29:41.064 16:06:44 -- accel/accel.sh@20 -- # IFS=: 00:29:41.064 16:06:44 -- 
accel/accel.sh@20 -- # read -r var val 00:29:41.064 16:06:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:29:41.064 16:06:44 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:29:41.064 16:06:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:41.064 00:29:41.064 real 0m5.495s 00:29:41.064 user 0m4.774s 00:29:41.065 sys 0m0.537s 00:29:41.065 16:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:41.065 ************************************ 00:29:41.065 END TEST accel_copy 00:29:41.065 ************************************ 00:29:41.065 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.065 16:06:44 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:41.065 16:06:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:41.065 16:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:41.065 16:06:44 -- common/autotest_common.sh@10 -- # set +x 00:29:41.065 ************************************ 00:29:41.065 START TEST accel_fill 00:29:41.065 ************************************ 00:29:41.065 16:06:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:41.065 16:06:44 -- accel/accel.sh@16 -- # local accel_opc 00:29:41.065 16:06:44 -- accel/accel.sh@17 -- # local accel_module 00:29:41.065 16:06:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:41.065 16:06:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:41.065 16:06:44 -- accel/accel.sh@12 -- # build_accel_config 00:29:41.065 16:06:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:41.065 16:06:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:41.065 16:06:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:41.065 16:06:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:41.065 16:06:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:41.065 16:06:44 -- accel/accel.sh@41 -- # local IFS=, 00:29:41.065 16:06:44 -- accel/accel.sh@42 -- # jq -r . 00:29:41.065 [2024-07-22 16:06:44.889733] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:41.065 [2024-07-22 16:06:44.889904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64652 ] 00:29:41.065 [2024-07-22 16:06:45.064134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.065 [2024-07-22 16:06:45.334469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.604 16:06:47 -- accel/accel.sh@18 -- # out=' 00:29:43.604 SPDK Configuration: 00:29:43.604 Core mask: 0x1 00:29:43.604 00:29:43.604 Accel Perf Configuration: 00:29:43.604 Workload Type: fill 00:29:43.604 Fill pattern: 0x80 00:29:43.604 Transfer size: 4096 bytes 00:29:43.604 Vector count 1 00:29:43.604 Module: software 00:29:43.604 Queue depth: 64 00:29:43.604 Allocate depth: 64 00:29:43.604 # threads/core: 1 00:29:43.604 Run time: 1 seconds 00:29:43.604 Verify: Yes 00:29:43.604 00:29:43.604 Running for 1 seconds... 
00:29:43.604 00:29:43.604 Core,Thread Transfers Bandwidth Failed Miscompares 00:29:43.604 ------------------------------------------------------------------------------------ 00:29:43.604 0,0 374912/s 1464 MiB/s 0 0 00:29:43.604 ==================================================================================== 00:29:43.604 Total 374912/s 1464 MiB/s 0 0' 00:29:43.604 16:06:47 -- accel/accel.sh@20 -- # IFS=: 00:29:43.604 16:06:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:43.604 16:06:47 -- accel/accel.sh@20 -- # read -r var val 00:29:43.604 16:06:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:43.604 16:06:47 -- accel/accel.sh@12 -- # build_accel_config 00:29:43.604 16:06:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:43.604 16:06:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:43.604 16:06:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:43.604 16:06:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:43.604 16:06:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:43.604 16:06:47 -- accel/accel.sh@41 -- # local IFS=, 00:29:43.604 16:06:47 -- accel/accel.sh@42 -- # jq -r . 00:29:43.604 [2024-07-22 16:06:47.611900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:43.604 [2024-07-22 16:06:47.612064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64688 ] 00:29:43.604 [2024-07-22 16:06:47.778439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.862 [2024-07-22 16:06:48.039053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.120 16:06:48 -- accel/accel.sh@21 -- # val= 00:29:44.120 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.120 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.120 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.120 16:06:48 -- accel/accel.sh@21 -- # val= 00:29:44.120 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.120 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.120 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.120 16:06:48 -- accel/accel.sh@21 -- # val=0x1 00:29:44.120 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val= 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val= 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val=fill 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@24 -- # accel_opc=fill 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val=0x80 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 
00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val= 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val=software 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@23 -- # accel_module=software 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val=64 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val=64 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val=1 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val=Yes 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val= 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:44.121 16:06:48 -- accel/accel.sh@21 -- # val= 00:29:44.121 16:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # IFS=: 00:29:44.121 16:06:48 -- accel/accel.sh@20 -- # read -r var val 00:29:46.022 16:06:50 -- accel/accel.sh@21 -- # val= 00:29:46.022 16:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # IFS=: 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # read -r var val 00:29:46.022 16:06:50 -- accel/accel.sh@21 -- # val= 00:29:46.022 16:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # IFS=: 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # read -r var val 00:29:46.022 16:06:50 -- accel/accel.sh@21 -- # val= 00:29:46.022 16:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # IFS=: 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # read -r var val 00:29:46.022 16:06:50 -- accel/accel.sh@21 -- # val= 00:29:46.022 16:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # IFS=: 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # read -r var val 00:29:46.022 16:06:50 -- accel/accel.sh@21 -- # val= 00:29:46.022 16:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # IFS=: 
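For the single-vector runs, the Bandwidth column in the result tables lines up with Transfers multiplied by the transfer size; in the fill run above, 374912 transfers/s at 4096 bytes works out to the reported 1464 MiB/s. A quick shell check of that arithmetic, with the values copied from the table above:

    # fill run above: transfers/s * transfer size, converted to MiB/s
    transfers=374912
    xfer_bytes=4096
    echo $(( transfers * xfer_bytes / 1024 / 1024 ))    # prints 1464, matching the table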
00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # read -r var val 00:29:46.022 16:06:50 -- accel/accel.sh@21 -- # val= 00:29:46.022 16:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # IFS=: 00:29:46.022 16:06:50 -- accel/accel.sh@20 -- # read -r var val 00:29:46.022 16:06:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:29:46.022 16:06:50 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:29:46.022 16:06:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:46.022 00:29:46.022 real 0m5.425s 00:29:46.022 user 0m4.769s 00:29:46.022 sys 0m0.471s 00:29:46.022 16:06:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.022 ************************************ 00:29:46.022 END TEST accel_fill 00:29:46.022 ************************************ 00:29:46.022 16:06:50 -- common/autotest_common.sh@10 -- # set +x 00:29:46.280 16:06:50 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:29:46.280 16:06:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:46.280 16:06:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.280 16:06:50 -- common/autotest_common.sh@10 -- # set +x 00:29:46.280 ************************************ 00:29:46.280 START TEST accel_copy_crc32c 00:29:46.280 ************************************ 00:29:46.280 16:06:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:29:46.280 16:06:50 -- accel/accel.sh@16 -- # local accel_opc 00:29:46.280 16:06:50 -- accel/accel.sh@17 -- # local accel_module 00:29:46.280 16:06:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:29:46.280 16:06:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:29:46.280 16:06:50 -- accel/accel.sh@12 -- # build_accel_config 00:29:46.280 16:06:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:46.280 16:06:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:46.280 16:06:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:46.280 16:06:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:46.280 16:06:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:46.280 16:06:50 -- accel/accel.sh@41 -- # local IFS=, 00:29:46.280 16:06:50 -- accel/accel.sh@42 -- # jq -r . 00:29:46.280 [2024-07-22 16:06:50.364809] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:46.280 [2024-07-22 16:06:50.365625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64736 ] 00:29:46.280 [2024-07-22 16:06:50.547719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.847 [2024-07-22 16:06:50.811926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.386 16:06:53 -- accel/accel.sh@18 -- # out=' 00:29:49.386 SPDK Configuration: 00:29:49.386 Core mask: 0x1 00:29:49.386 00:29:49.386 Accel Perf Configuration: 00:29:49.386 Workload Type: copy_crc32c 00:29:49.386 CRC-32C seed: 0 00:29:49.386 Vector size: 4096 bytes 00:29:49.386 Transfer size: 4096 bytes 00:29:49.386 Vector count 1 00:29:49.386 Module: software 00:29:49.386 Queue depth: 32 00:29:49.386 Allocate depth: 32 00:29:49.386 # threads/core: 1 00:29:49.386 Run time: 1 seconds 00:29:49.386 Verify: Yes 00:29:49.386 00:29:49.386 Running for 1 seconds... 
00:29:49.386 00:29:49.386 Core,Thread Transfers Bandwidth Failed Miscompares 00:29:49.386 ------------------------------------------------------------------------------------ 00:29:49.386 0,0 191040/s 746 MiB/s 0 0 00:29:49.386 ==================================================================================== 00:29:49.386 Total 191040/s 746 MiB/s 0 0' 00:29:49.386 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.386 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.386 16:06:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:29:49.386 16:06:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:29:49.386 16:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:29:49.386 16:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:49.386 16:06:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:49.386 16:06:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:49.386 16:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:49.386 16:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:49.386 16:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:29:49.386 16:06:53 -- accel/accel.sh@42 -- # jq -r . 00:29:49.386 [2024-07-22 16:06:53.085583] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:49.386 [2024-07-22 16:06:53.085748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64763 ] 00:29:49.386 [2024-07-22 16:06:53.251053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.386 [2024-07-22 16:06:53.517245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val= 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val= 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=0x1 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val= 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val= 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=copy_crc32c 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=0 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 
16:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val= 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=software 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@23 -- # accel_module=software 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=32 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=32 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=1 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val=Yes 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val= 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:49.644 16:06:53 -- accel/accel.sh@21 -- # val= 00:29:49.644 16:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # IFS=: 00:29:49.644 16:06:53 -- accel/accel.sh@20 -- # read -r var val 00:29:51.545 16:06:55 -- accel/accel.sh@21 -- # val= 00:29:51.545 16:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # IFS=: 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # read -r var val 00:29:51.545 16:06:55 -- accel/accel.sh@21 -- # val= 00:29:51.545 16:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # IFS=: 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # read -r var val 00:29:51.545 16:06:55 -- accel/accel.sh@21 -- # val= 00:29:51.545 16:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # IFS=: 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # read -r var val 00:29:51.545 16:06:55 -- accel/accel.sh@21 -- # val= 00:29:51.545 16:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # IFS=: 
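The copy_crc32c workload does a copy plus a CRC-32C computation per buffer, which is consistent with the software results in this run: roughly 746 MiB/s here versus about 1496 MiB/s for plain crc32c and 913 MiB/s for plain copy earlier in the log. To compare workloads side by side from a saved copy of this console output, one option is a small awk filter (the build.log filename is hypothetical):

    # print each workload type followed by its Total result row
    awk '/Workload Type:/ {wl=$NF} /Total [0-9]+\/s/ {print wl ": " $0}' build.log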
00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # read -r var val 00:29:51.545 16:06:55 -- accel/accel.sh@21 -- # val= 00:29:51.545 16:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # IFS=: 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # read -r var val 00:29:51.545 16:06:55 -- accel/accel.sh@21 -- # val= 00:29:51.545 16:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # IFS=: 00:29:51.545 16:06:55 -- accel/accel.sh@20 -- # read -r var val 00:29:51.545 16:06:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:29:51.546 16:06:55 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:29:51.546 16:06:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:51.546 00:29:51.546 real 0m5.449s 00:29:51.546 user 0m4.747s 00:29:51.546 sys 0m0.514s 00:29:51.546 16:06:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.546 16:06:55 -- common/autotest_common.sh@10 -- # set +x 00:29:51.546 ************************************ 00:29:51.546 END TEST accel_copy_crc32c 00:29:51.546 ************************************ 00:29:51.546 16:06:55 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:29:51.546 16:06:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:29:51.546 16:06:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.546 16:06:55 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 ************************************ 00:29:51.804 START TEST accel_copy_crc32c_C2 00:29:51.804 ************************************ 00:29:51.804 16:06:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:29:51.804 16:06:55 -- accel/accel.sh@16 -- # local accel_opc 00:29:51.804 16:06:55 -- accel/accel.sh@17 -- # local accel_module 00:29:51.804 16:06:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:29:51.804 16:06:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:29:51.804 16:06:55 -- accel/accel.sh@12 -- # build_accel_config 00:29:51.804 16:06:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:51.804 16:06:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:51.804 16:06:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:51.804 16:06:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:51.804 16:06:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:51.804 16:06:55 -- accel/accel.sh@41 -- # local IFS=, 00:29:51.804 16:06:55 -- accel/accel.sh@42 -- # jq -r . 00:29:51.804 [2024-07-22 16:06:55.865306] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:51.804 [2024-07-22 16:06:55.865518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64814 ] 00:29:51.804 [2024-07-22 16:06:56.051660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.063 [2024-07-22 16:06:56.319842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.620 16:06:58 -- accel/accel.sh@18 -- # out=' 00:29:54.620 SPDK Configuration: 00:29:54.620 Core mask: 0x1 00:29:54.620 00:29:54.620 Accel Perf Configuration: 00:29:54.620 Workload Type: copy_crc32c 00:29:54.620 CRC-32C seed: 0 00:29:54.620 Vector size: 4096 bytes 00:29:54.620 Transfer size: 8192 bytes 00:29:54.620 Vector count 2 00:29:54.620 Module: software 00:29:54.620 Queue depth: 32 00:29:54.620 Allocate depth: 32 00:29:54.620 # threads/core: 1 00:29:54.620 Run time: 1 seconds 00:29:54.620 Verify: Yes 00:29:54.620 00:29:54.620 Running for 1 seconds... 00:29:54.620 00:29:54.620 Core,Thread Transfers Bandwidth Failed Miscompares 00:29:54.620 ------------------------------------------------------------------------------------ 00:29:54.620 0,0 140128/s 1094 MiB/s 0 0 00:29:54.620 ==================================================================================== 00:29:54.620 Total 140128/s 547 MiB/s 0 0' 00:29:54.620 16:06:58 -- accel/accel.sh@20 -- # IFS=: 00:29:54.620 16:06:58 -- accel/accel.sh@20 -- # read -r var val 00:29:54.620 16:06:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:29:54.620 16:06:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:29:54.620 16:06:58 -- accel/accel.sh@12 -- # build_accel_config 00:29:54.620 16:06:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:54.620 16:06:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:54.620 16:06:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:54.620 16:06:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:54.620 16:06:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:54.620 16:06:58 -- accel/accel.sh@41 -- # local IFS=, 00:29:54.620 16:06:58 -- accel/accel.sh@42 -- # jq -r . 00:29:54.620 [2024-07-22 16:06:58.604611] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:54.620 [2024-07-22 16:06:58.604843] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64846 ] 00:29:54.620 [2024-07-22 16:06:58.791050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.879 [2024-07-22 16:06:59.053195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val= 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val= 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=0x1 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val= 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val= 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=copy_crc32c 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=0 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val='8192 bytes' 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val= 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=software 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@23 -- # accel_module=software 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=32 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=32 
00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=1 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val=Yes 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val= 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:55.138 16:06:59 -- accel/accel.sh@21 -- # val= 00:29:55.138 16:06:59 -- accel/accel.sh@22 -- # case "$var" in 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # IFS=: 00:29:55.138 16:06:59 -- accel/accel.sh@20 -- # read -r var val 00:29:57.040 16:07:01 -- accel/accel.sh@21 -- # val= 00:29:57.040 16:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # IFS=: 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # read -r var val 00:29:57.040 16:07:01 -- accel/accel.sh@21 -- # val= 00:29:57.040 16:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # IFS=: 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # read -r var val 00:29:57.040 16:07:01 -- accel/accel.sh@21 -- # val= 00:29:57.040 16:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # IFS=: 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # read -r var val 00:29:57.040 16:07:01 -- accel/accel.sh@21 -- # val= 00:29:57.040 16:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # IFS=: 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # read -r var val 00:29:57.040 16:07:01 -- accel/accel.sh@21 -- # val= 00:29:57.040 16:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # IFS=: 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # read -r var val 00:29:57.040 16:07:01 -- accel/accel.sh@21 -- # val= 00:29:57.040 16:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # IFS=: 00:29:57.040 16:07:01 -- accel/accel.sh@20 -- # read -r var val 00:29:57.040 ************************************ 00:29:57.040 END TEST accel_copy_crc32c_C2 00:29:57.040 ************************************ 00:29:57.040 16:07:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:29:57.040 16:07:01 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:29:57.040 16:07:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:57.040 00:29:57.040 real 0m5.414s 00:29:57.040 user 0m4.698s 00:29:57.040 sys 0m0.547s 00:29:57.040 16:07:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.040 16:07:01 -- common/autotest_common.sh@10 -- # set +x 00:29:57.040 16:07:01 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:29:57.040 16:07:01 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:29:57.040 16:07:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:57.040 16:07:01 -- common/autotest_common.sh@10 -- # set +x 00:29:57.040 ************************************ 00:29:57.040 START TEST accel_dualcast 00:29:57.040 ************************************ 00:29:57.040 16:07:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:29:57.040 16:07:01 -- accel/accel.sh@16 -- # local accel_opc 00:29:57.040 16:07:01 -- accel/accel.sh@17 -- # local accel_module 00:29:57.040 16:07:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:29:57.040 16:07:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:29:57.040 16:07:01 -- accel/accel.sh@12 -- # build_accel_config 00:29:57.040 16:07:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:29:57.040 16:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:57.040 16:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:57.041 16:07:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:29:57.041 16:07:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:29:57.041 16:07:01 -- accel/accel.sh@41 -- # local IFS=, 00:29:57.041 16:07:01 -- accel/accel.sh@42 -- # jq -r . 00:29:57.299 [2024-07-22 16:07:01.336670] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:57.299 [2024-07-22 16:07:01.336865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64892 ] 00:29:57.299 [2024-07-22 16:07:01.520446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.556 [2024-07-22 16:07:01.821639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.083 16:07:04 -- accel/accel.sh@18 -- # out=' 00:30:00.083 SPDK Configuration: 00:30:00.083 Core mask: 0x1 00:30:00.083 00:30:00.083 Accel Perf Configuration: 00:30:00.083 Workload Type: dualcast 00:30:00.083 Transfer size: 4096 bytes 00:30:00.083 Vector count 1 00:30:00.083 Module: software 00:30:00.083 Queue depth: 32 00:30:00.083 Allocate depth: 32 00:30:00.083 # threads/core: 1 00:30:00.083 Run time: 1 seconds 00:30:00.083 Verify: Yes 00:30:00.083 00:30:00.083 Running for 1 seconds... 00:30:00.083 00:30:00.083 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:00.083 ------------------------------------------------------------------------------------ 00:30:00.083 0,0 257760/s 1006 MiB/s 0 0 00:30:00.083 ==================================================================================== 00:30:00.083 Total 257760/s 1006 MiB/s 0 0' 00:30:00.083 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.083 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.083 16:07:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:30:00.083 16:07:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:30:00.083 16:07:04 -- accel/accel.sh@12 -- # build_accel_config 00:30:00.083 16:07:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:00.083 16:07:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:00.083 16:07:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:00.083 16:07:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:00.083 16:07:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:00.083 16:07:04 -- accel/accel.sh@41 -- # local IFS=, 00:30:00.083 16:07:04 -- accel/accel.sh@42 -- # jq -r . 
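Annotation: the dualcast workload above copies a single 4096-byte source buffer into two destination buffers per operation. The snippet below is a minimal sketch of that semantic in plain C (two memcpy calls), assuming that reading of the workload name and configuration; it is not SPDK's implementation and is not tuned for speed.

    /* dualcast_sketch.c - illustrative: one source copied into two destinations. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void dualcast(uint8_t *dst1, uint8_t *dst2, const uint8_t *src, size_t len)
    {
        memcpy(dst1, src, len);              /* the same payload lands in both outputs */
        memcpy(dst2, src, len);
    }

    int main(void)
    {
        enum { LEN = 4096 };                 /* transfer size from the run above */
        static uint8_t src[LEN], d1[LEN], d2[LEN];

        for (size_t i = 0; i < LEN; i++)
            src[i] = (uint8_t)(i ^ 0x5a);
        dualcast(d1, d2, src, LEN);

        printf("dst1 %s src, dst2 %s src\n",
               memcmp(d1, src, LEN) ? "differs from" : "matches",
               memcmp(d2, src, LEN) ? "differs from" : "matches");
        return 0;
    }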
00:30:00.083 [2024-07-22 16:07:04.225685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:00.083 [2024-07-22 16:07:04.225854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64924 ] 00:30:00.341 [2024-07-22 16:07:04.394211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.599 [2024-07-22 16:07:04.742922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.857 16:07:04 -- accel/accel.sh@21 -- # val= 00:30:00.857 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.857 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.857 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.857 16:07:04 -- accel/accel.sh@21 -- # val= 00:30:00.857 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.857 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val=0x1 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val= 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val= 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val=dualcast 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val= 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val=software 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@23 -- # accel_module=software 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val=32 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val=32 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val=1 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 
16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val=Yes 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val= 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:04 -- accel/accel.sh@20 -- # read -r var val 00:30:00.858 16:07:04 -- accel/accel.sh@21 -- # val= 00:30:00.858 16:07:04 -- accel/accel.sh@22 -- # case "$var" in 00:30:00.858 16:07:05 -- accel/accel.sh@20 -- # IFS=: 00:30:00.858 16:07:05 -- accel/accel.sh@20 -- # read -r var val 00:30:02.788 16:07:06 -- accel/accel.sh@21 -- # val= 00:30:02.788 16:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # IFS=: 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # read -r var val 00:30:02.788 16:07:06 -- accel/accel.sh@21 -- # val= 00:30:02.788 16:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # IFS=: 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # read -r var val 00:30:02.788 16:07:06 -- accel/accel.sh@21 -- # val= 00:30:02.788 16:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # IFS=: 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # read -r var val 00:30:02.788 16:07:06 -- accel/accel.sh@21 -- # val= 00:30:02.788 16:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # IFS=: 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # read -r var val 00:30:02.788 16:07:06 -- accel/accel.sh@21 -- # val= 00:30:02.788 16:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # IFS=: 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # read -r var val 00:30:02.788 16:07:06 -- accel/accel.sh@21 -- # val= 00:30:02.788 16:07:06 -- accel/accel.sh@22 -- # case "$var" in 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # IFS=: 00:30:02.788 16:07:06 -- accel/accel.sh@20 -- # read -r var val 00:30:02.788 16:07:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:02.788 16:07:07 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:30:02.788 16:07:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:02.788 00:30:02.788 real 0m5.713s 00:30:02.788 user 0m4.970s 00:30:02.788 sys 0m0.555s 00:30:02.788 ************************************ 00:30:02.788 END TEST accel_dualcast 00:30:02.788 ************************************ 00:30:02.788 16:07:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.789 16:07:07 -- common/autotest_common.sh@10 -- # set +x 00:30:02.789 16:07:07 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:30:02.789 16:07:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:02.789 16:07:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:02.789 16:07:07 -- common/autotest_common.sh@10 -- # set +x 00:30:02.789 ************************************ 00:30:02.789 START TEST accel_compare 00:30:02.789 ************************************ 00:30:02.789 16:07:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:30:02.789 
16:07:07 -- accel/accel.sh@16 -- # local accel_opc 00:30:02.789 16:07:07 -- accel/accel.sh@17 -- # local accel_module 00:30:02.789 16:07:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:30:02.789 16:07:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:30:02.789 16:07:07 -- accel/accel.sh@12 -- # build_accel_config 00:30:02.789 16:07:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:02.789 16:07:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:02.789 16:07:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:02.789 16:07:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:02.789 16:07:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:02.789 16:07:07 -- accel/accel.sh@41 -- # local IFS=, 00:30:02.789 16:07:07 -- accel/accel.sh@42 -- # jq -r . 00:30:03.047 [2024-07-22 16:07:07.098601] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:03.047 [2024-07-22 16:07:07.098857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64976 ] 00:30:03.047 [2024-07-22 16:07:07.278785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.309 [2024-07-22 16:07:07.546760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.885 16:07:09 -- accel/accel.sh@18 -- # out=' 00:30:05.885 SPDK Configuration: 00:30:05.885 Core mask: 0x1 00:30:05.885 00:30:05.885 Accel Perf Configuration: 00:30:05.885 Workload Type: compare 00:30:05.885 Transfer size: 4096 bytes 00:30:05.885 Vector count 1 00:30:05.885 Module: software 00:30:05.885 Queue depth: 32 00:30:05.885 Allocate depth: 32 00:30:05.885 # threads/core: 1 00:30:05.885 Run time: 1 seconds 00:30:05.885 Verify: Yes 00:30:05.885 00:30:05.885 Running for 1 seconds... 00:30:05.885 00:30:05.885 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:05.885 ------------------------------------------------------------------------------------ 00:30:05.885 0,0 360576/s 1408 MiB/s 0 0 00:30:05.885 ==================================================================================== 00:30:05.885 Total 360576/s 1408 MiB/s 0 0' 00:30:05.885 16:07:09 -- accel/accel.sh@20 -- # IFS=: 00:30:05.885 16:07:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:30:05.885 16:07:09 -- accel/accel.sh@20 -- # read -r var val 00:30:05.885 16:07:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:30:05.885 16:07:09 -- accel/accel.sh@12 -- # build_accel_config 00:30:05.885 16:07:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:05.885 16:07:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:05.885 16:07:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:05.885 16:07:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:05.885 16:07:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:05.885 16:07:09 -- accel/accel.sh@41 -- # local IFS=, 00:30:05.885 16:07:09 -- accel/accel.sh@42 -- # jq -r . 00:30:05.885 [2024-07-22 16:07:09.804427] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
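Annotation: the compare workload checks two equal-sized buffers for byte equality; the Failed and Miscompares columns above stay at 0 presumably because the test feeds it matching data. A minimal sketch of the operation with memcmp semantics (hypothetical helper name, not SPDK's code):

    /* compare_sketch.c - illustrative byte-wise buffer comparison. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Returns 0 when the buffers match, non-zero otherwise (memcmp semantics). */
    static int buffers_compare(const uint8_t *a, const uint8_t *b, size_t len)
    {
        return memcmp(a, b, len);
    }

    int main(void)
    {
        enum { LEN = 4096 };                 /* transfer size from the run above */
        static uint8_t x[LEN], y[LEN];

        memset(x, 0xab, LEN);
        memset(y, 0xab, LEN);
        printf("identical buffers: %s\n", buffers_compare(x, y, LEN) ? "miscompare" : "match");

        y[LEN - 1] ^= 1;                     /* flip one bit to force a miscompare */
        printf("after bit flip:    %s\n", buffers_compare(x, y, LEN) ? "miscompare" : "match");
        return 0;
    }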
00:30:05.885 [2024-07-22 16:07:09.804578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65008 ] 00:30:05.885 [2024-07-22 16:07:09.970514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.143 [2024-07-22 16:07:10.276367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.401 16:07:10 -- accel/accel.sh@21 -- # val= 00:30:06.401 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.401 16:07:10 -- accel/accel.sh@21 -- # val= 00:30:06.401 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.401 16:07:10 -- accel/accel.sh@21 -- # val=0x1 00:30:06.401 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.401 16:07:10 -- accel/accel.sh@21 -- # val= 00:30:06.401 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.401 16:07:10 -- accel/accel.sh@21 -- # val= 00:30:06.401 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.401 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.401 16:07:10 -- accel/accel.sh@21 -- # val=compare 00:30:06.401 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.401 16:07:10 -- accel/accel.sh@24 -- # accel_opc=compare 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val= 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val=software 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@23 -- # accel_module=software 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val=32 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val=32 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val=1 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val='1 seconds' 
00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val=Yes 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val= 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:06.402 16:07:10 -- accel/accel.sh@21 -- # val= 00:30:06.402 16:07:10 -- accel/accel.sh@22 -- # case "$var" in 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # IFS=: 00:30:06.402 16:07:10 -- accel/accel.sh@20 -- # read -r var val 00:30:08.303 16:07:12 -- accel/accel.sh@21 -- # val= 00:30:08.303 16:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # IFS=: 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # read -r var val 00:30:08.303 16:07:12 -- accel/accel.sh@21 -- # val= 00:30:08.303 16:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # IFS=: 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # read -r var val 00:30:08.303 16:07:12 -- accel/accel.sh@21 -- # val= 00:30:08.303 16:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # IFS=: 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # read -r var val 00:30:08.303 16:07:12 -- accel/accel.sh@21 -- # val= 00:30:08.303 16:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # IFS=: 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # read -r var val 00:30:08.303 16:07:12 -- accel/accel.sh@21 -- # val= 00:30:08.303 16:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # IFS=: 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # read -r var val 00:30:08.303 16:07:12 -- accel/accel.sh@21 -- # val= 00:30:08.303 16:07:12 -- accel/accel.sh@22 -- # case "$var" in 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # IFS=: 00:30:08.303 16:07:12 -- accel/accel.sh@20 -- # read -r var val 00:30:08.303 16:07:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:08.303 16:07:12 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:30:08.303 16:07:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:08.303 00:30:08.303 real 0m5.482s 00:30:08.303 user 0m4.783s 00:30:08.303 sys 0m0.515s 00:30:08.303 ************************************ 00:30:08.303 END TEST accel_compare 00:30:08.303 ************************************ 00:30:08.303 16:07:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.303 16:07:12 -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 16:07:12 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:30:08.562 16:07:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:08.562 16:07:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:08.562 16:07:12 -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 ************************************ 00:30:08.562 START TEST accel_xor 00:30:08.562 ************************************ 00:30:08.562 16:07:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:30:08.562 16:07:12 -- accel/accel.sh@16 -- # local accel_opc 00:30:08.562 16:07:12 -- accel/accel.sh@17 -- # local accel_module 00:30:08.562 
16:07:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:30:08.562 16:07:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:30:08.562 16:07:12 -- accel/accel.sh@12 -- # build_accel_config 00:30:08.562 16:07:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:08.562 16:07:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:08.562 16:07:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:08.562 16:07:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:08.562 16:07:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:08.562 16:07:12 -- accel/accel.sh@41 -- # local IFS=, 00:30:08.562 16:07:12 -- accel/accel.sh@42 -- # jq -r . 00:30:08.562 [2024-07-22 16:07:12.626487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:08.562 [2024-07-22 16:07:12.626647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65054 ] 00:30:08.562 [2024-07-22 16:07:12.798983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.821 [2024-07-22 16:07:13.066870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.349 16:07:15 -- accel/accel.sh@18 -- # out=' 00:30:11.349 SPDK Configuration: 00:30:11.349 Core mask: 0x1 00:30:11.349 00:30:11.349 Accel Perf Configuration: 00:30:11.349 Workload Type: xor 00:30:11.349 Source buffers: 2 00:30:11.349 Transfer size: 4096 bytes 00:30:11.349 Vector count 1 00:30:11.349 Module: software 00:30:11.349 Queue depth: 32 00:30:11.349 Allocate depth: 32 00:30:11.349 # threads/core: 1 00:30:11.349 Run time: 1 seconds 00:30:11.349 Verify: Yes 00:30:11.349 00:30:11.349 Running for 1 seconds... 00:30:11.349 00:30:11.349 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:11.349 ------------------------------------------------------------------------------------ 00:30:11.349 0,0 164864/s 644 MiB/s 0 0 00:30:11.349 ==================================================================================== 00:30:11.349 Total 164864/s 644 MiB/s 0 0' 00:30:11.349 16:07:15 -- accel/accel.sh@20 -- # IFS=: 00:30:11.349 16:07:15 -- accel/accel.sh@20 -- # read -r var val 00:30:11.349 16:07:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:30:11.349 16:07:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:30:11.349 16:07:15 -- accel/accel.sh@12 -- # build_accel_config 00:30:11.349 16:07:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:11.349 16:07:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:11.349 16:07:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:11.349 16:07:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:11.349 16:07:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:11.349 16:07:15 -- accel/accel.sh@41 -- # local IFS=, 00:30:11.349 16:07:15 -- accel/accel.sh@42 -- # jq -r . 00:30:11.349 [2024-07-22 16:07:15.578344] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
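Annotation: this xor run combines two 4096-byte source buffers into one destination (the next test repeats the workload with three sources). Below is a minimal bytewise N-way XOR with made-up names and no vectorization, standing in for whatever the software module actually does:

    /* xor_sketch.c - illustrative bytewise XOR of N equal-sized source buffers. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void xor_buffers(uint8_t *dst, uint8_t *srcs[], int nsrc, size_t len)
    {
        memcpy(dst, srcs[0], len);           /* start from the first source */
        for (int s = 1; s < nsrc; s++)
            for (size_t i = 0; i < len; i++)
                dst[i] ^= srcs[s][i];        /* fold in each remaining source */
    }

    int main(void)
    {
        enum { LEN = 4096 };                 /* transfer size from the run above */
        static uint8_t a[LEN], b[LEN], out[LEN];
        uint8_t *srcs[] = { a, b };

        for (size_t i = 0; i < LEN; i++) {
            a[i] = (uint8_t)i;
            b[i] = (uint8_t)(i >> 1);
        }
        xor_buffers(out, srcs, 2, LEN);      /* pass three pointers for the -x 3 case */
        printf("out[0..3] = %02x %02x %02x %02x\n", out[0], out[1], out[2], out[3]);
        return 0;
    }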
00:30:11.349 [2024-07-22 16:07:15.579405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65090 ] 00:30:11.607 [2024-07-22 16:07:15.752265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.865 [2024-07-22 16:07:16.087341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.122 16:07:16 -- accel/accel.sh@21 -- # val= 00:30:12.122 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.122 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.122 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.122 16:07:16 -- accel/accel.sh@21 -- # val= 00:30:12.122 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=0x1 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val= 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val= 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=xor 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=2 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val= 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=software 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@23 -- # accel_module=software 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=32 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=32 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=1 00:30:12.123 16:07:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val=Yes 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val= 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:12.123 16:07:16 -- accel/accel.sh@21 -- # val= 00:30:12.123 16:07:16 -- accel/accel.sh@22 -- # case "$var" in 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # IFS=: 00:30:12.123 16:07:16 -- accel/accel.sh@20 -- # read -r var val 00:30:14.650 16:07:18 -- accel/accel.sh@21 -- # val= 00:30:14.650 16:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # IFS=: 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # read -r var val 00:30:14.650 16:07:18 -- accel/accel.sh@21 -- # val= 00:30:14.650 16:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # IFS=: 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # read -r var val 00:30:14.650 16:07:18 -- accel/accel.sh@21 -- # val= 00:30:14.650 16:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # IFS=: 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # read -r var val 00:30:14.650 16:07:18 -- accel/accel.sh@21 -- # val= 00:30:14.650 16:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # IFS=: 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # read -r var val 00:30:14.650 16:07:18 -- accel/accel.sh@21 -- # val= 00:30:14.650 16:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # IFS=: 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # read -r var val 00:30:14.650 16:07:18 -- accel/accel.sh@21 -- # val= 00:30:14.650 16:07:18 -- accel/accel.sh@22 -- # case "$var" in 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # IFS=: 00:30:14.650 16:07:18 -- accel/accel.sh@20 -- # read -r var val 00:30:14.650 ************************************ 00:30:14.650 END TEST accel_xor 00:30:14.650 ************************************ 00:30:14.650 16:07:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:14.650 16:07:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:30:14.650 16:07:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:14.650 00:30:14.650 real 0m6.022s 00:30:14.650 user 0m5.295s 00:30:14.650 sys 0m0.533s 00:30:14.650 16:07:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:14.650 16:07:18 -- common/autotest_common.sh@10 -- # set +x 00:30:14.650 16:07:18 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:30:14.650 16:07:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:30:14.651 16:07:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:14.651 16:07:18 -- common/autotest_common.sh@10 -- # set +x 00:30:14.651 ************************************ 00:30:14.651 START TEST accel_xor 00:30:14.651 ************************************ 00:30:14.651 
16:07:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:30:14.651 16:07:18 -- accel/accel.sh@16 -- # local accel_opc 00:30:14.651 16:07:18 -- accel/accel.sh@17 -- # local accel_module 00:30:14.651 16:07:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:30:14.651 16:07:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:30:14.651 16:07:18 -- accel/accel.sh@12 -- # build_accel_config 00:30:14.651 16:07:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:14.651 16:07:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:14.651 16:07:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:14.651 16:07:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:14.651 16:07:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:14.651 16:07:18 -- accel/accel.sh@41 -- # local IFS=, 00:30:14.651 16:07:18 -- accel/accel.sh@42 -- # jq -r . 00:30:14.651 [2024-07-22 16:07:18.686308] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:14.651 [2024-07-22 16:07:18.686473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65138 ] 00:30:14.651 [2024-07-22 16:07:18.855895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.908 [2024-07-22 16:07:19.130072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.437 16:07:21 -- accel/accel.sh@18 -- # out=' 00:30:17.437 SPDK Configuration: 00:30:17.437 Core mask: 0x1 00:30:17.437 00:30:17.437 Accel Perf Configuration: 00:30:17.437 Workload Type: xor 00:30:17.437 Source buffers: 3 00:30:17.437 Transfer size: 4096 bytes 00:30:17.437 Vector count 1 00:30:17.437 Module: software 00:30:17.437 Queue depth: 32 00:30:17.437 Allocate depth: 32 00:30:17.437 # threads/core: 1 00:30:17.437 Run time: 1 seconds 00:30:17.437 Verify: Yes 00:30:17.437 00:30:17.437 Running for 1 seconds... 00:30:17.437 00:30:17.437 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:17.437 ------------------------------------------------------------------------------------ 00:30:17.437 0,0 174560/s 681 MiB/s 0 0 00:30:17.437 ==================================================================================== 00:30:17.437 Total 174560/s 681 MiB/s 0 0' 00:30:17.437 16:07:21 -- accel/accel.sh@20 -- # IFS=: 00:30:17.437 16:07:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:30:17.437 16:07:21 -- accel/accel.sh@20 -- # read -r var val 00:30:17.437 16:07:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:30:17.437 16:07:21 -- accel/accel.sh@12 -- # build_accel_config 00:30:17.437 16:07:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:17.437 16:07:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:17.437 16:07:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:17.437 16:07:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:17.437 16:07:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:17.437 16:07:21 -- accel/accel.sh@41 -- # local IFS=, 00:30:17.437 16:07:21 -- accel/accel.sh@42 -- # jq -r . 00:30:17.437 [2024-07-22 16:07:21.411283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
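Annotation: the MiB/s columns track transfers per second times buffer size. For the three-source xor run above, 174560 transfers/s of 4096 bytes comes to roughly 681.9 MiB/s, matching the printed figure; the earlier copy_crc32c_C2 table, where the per-core line shows 1094 MiB/s against a 547 MiB/s total, looks consistent with one line using the 8192-byte transfer size and the other the 4096-byte vector size. The interpretation of the columns is an assumption; only the arithmetic below is certain.

    /* bandwidth_check.c - sanity-check a MiB/s column from transfers/s x size. */
    #include <stdio.h>

    int main(void)
    {
        double xfers_per_sec = 174560.0;     /* from the xor -x 3 run above */
        double xfer_bytes    = 4096.0;       /* transfer size */
        printf("%.1f MiB/s\n", xfers_per_sec * xfer_bytes / (1024.0 * 1024.0));
        return 0;                            /* prints about 681.9 */
    }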
00:30:17.437 [2024-07-22 16:07:21.411447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65175 ] 00:30:17.437 [2024-07-22 16:07:21.578979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.695 [2024-07-22 16:07:21.860384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val= 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val= 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=0x1 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val= 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val= 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=xor 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@24 -- # accel_opc=xor 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=3 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val= 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=software 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@23 -- # accel_module=software 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=32 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=32 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=1 00:30:17.953 16:07:22 -- 
accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val=Yes 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val= 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:17.953 16:07:22 -- accel/accel.sh@21 -- # val= 00:30:17.953 16:07:22 -- accel/accel.sh@22 -- # case "$var" in 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # IFS=: 00:30:17.953 16:07:22 -- accel/accel.sh@20 -- # read -r var val 00:30:19.852 16:07:24 -- accel/accel.sh@21 -- # val= 00:30:19.852 16:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # IFS=: 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # read -r var val 00:30:19.852 16:07:24 -- accel/accel.sh@21 -- # val= 00:30:19.852 16:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # IFS=: 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # read -r var val 00:30:19.852 16:07:24 -- accel/accel.sh@21 -- # val= 00:30:19.852 16:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # IFS=: 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # read -r var val 00:30:19.852 16:07:24 -- accel/accel.sh@21 -- # val= 00:30:19.852 16:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # IFS=: 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # read -r var val 00:30:19.852 16:07:24 -- accel/accel.sh@21 -- # val= 00:30:19.852 16:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # IFS=: 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # read -r var val 00:30:19.852 16:07:24 -- accel/accel.sh@21 -- # val= 00:30:19.852 16:07:24 -- accel/accel.sh@22 -- # case "$var" in 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # IFS=: 00:30:19.852 16:07:24 -- accel/accel.sh@20 -- # read -r var val 00:30:19.852 16:07:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:19.852 16:07:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:30:19.852 ************************************ 00:30:19.852 END TEST accel_xor 00:30:19.852 ************************************ 00:30:19.852 16:07:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:19.852 00:30:19.852 real 0m5.475s 00:30:19.852 user 0m4.755s 00:30:19.852 sys 0m0.535s 00:30:19.852 16:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.852 16:07:24 -- common/autotest_common.sh@10 -- # set +x 00:30:20.111 16:07:24 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:30:20.111 16:07:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:30:20.111 16:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:20.111 16:07:24 -- common/autotest_common.sh@10 -- # set +x 00:30:20.111 ************************************ 00:30:20.111 START TEST accel_dif_verify 00:30:20.111 ************************************ 
00:30:20.111 16:07:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:30:20.111 16:07:24 -- accel/accel.sh@16 -- # local accel_opc 00:30:20.111 16:07:24 -- accel/accel.sh@17 -- # local accel_module 00:30:20.111 16:07:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:30:20.111 16:07:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:30:20.111 16:07:24 -- accel/accel.sh@12 -- # build_accel_config 00:30:20.111 16:07:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:20.111 16:07:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:20.111 16:07:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:20.111 16:07:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:20.111 16:07:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:20.111 16:07:24 -- accel/accel.sh@41 -- # local IFS=, 00:30:20.111 16:07:24 -- accel/accel.sh@42 -- # jq -r . 00:30:20.111 [2024-07-22 16:07:24.209497] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:20.111 [2024-07-22 16:07:24.209860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65222 ] 00:30:20.111 [2024-07-22 16:07:24.375872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.677 [2024-07-22 16:07:24.682120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.206 16:07:26 -- accel/accel.sh@18 -- # out=' 00:30:23.206 SPDK Configuration: 00:30:23.206 Core mask: 0x1 00:30:23.206 00:30:23.206 Accel Perf Configuration: 00:30:23.206 Workload Type: dif_verify 00:30:23.206 Vector size: 4096 bytes 00:30:23.206 Transfer size: 4096 bytes 00:30:23.206 Block size: 512 bytes 00:30:23.206 Metadata size: 8 bytes 00:30:23.206 Vector count 1 00:30:23.206 Module: software 00:30:23.206 Queue depth: 32 00:30:23.206 Allocate depth: 32 00:30:23.206 # threads/core: 1 00:30:23.206 Run time: 1 seconds 00:30:23.206 Verify: No 00:30:23.206 00:30:23.206 Running for 1 seconds... 00:30:23.206 00:30:23.206 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:23.206 ------------------------------------------------------------------------------------ 00:30:23.206 0,0 88800/s 352 MiB/s 0 0 00:30:23.206 ==================================================================================== 00:30:23.206 Total 88800/s 346 MiB/s 0 0' 00:30:23.206 16:07:26 -- accel/accel.sh@20 -- # IFS=: 00:30:23.206 16:07:26 -- accel/accel.sh@20 -- # read -r var val 00:30:23.206 16:07:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:30:23.206 16:07:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:30:23.206 16:07:26 -- accel/accel.sh@12 -- # build_accel_config 00:30:23.206 16:07:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:23.206 16:07:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:23.206 16:07:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:23.206 16:07:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:23.206 16:07:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:23.206 16:07:26 -- accel/accel.sh@41 -- # local IFS=, 00:30:23.206 16:07:26 -- accel/accel.sh@42 -- # jq -r . 00:30:23.206 [2024-07-22 16:07:26.996403] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
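Annotation: dif_verify works on protected data; the configuration above pairs a 4096-byte buffer with a 512-byte block size and 8 bytes of metadata per block, i.e. eight protection tuples per buffer. The sketch below assumes the conventional T10 DIF tuple layout (16-bit CRC guard with polynomial 0x8BB7, 16-bit application tag, 32-bit reference tag) and only checks the guard tag; it ignores byte order, app/ref tag rules, and interleaved metadata, and none of the names come from SPDK.

    /* dif_sketch.c - illustrative T10 DIF guard-tag generate/verify, not SPDK's code. */
    #include <stdint.h>
    #include <stdio.h>

    /* CRC16 with polynomial 0x8BB7 (the T10 DIF guard CRC), MSB-first, seed 0. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)(buf[i] << 8);
            for (int b = 0; b < 8; b++)
                crc = (uint16_t)((crc & 0x8000) ? (crc << 1) ^ 0x8BB7 : crc << 1);
        }
        return crc;
    }

    struct t10_dif {                         /* conventional 8-byte protection tuple */
        uint16_t guard;                      /* CRC16 of the 512-byte data block */
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    static int dif_verify_block(const uint8_t *block, size_t block_size,
                                const struct t10_dif *dif)
    {
        return crc16_t10dif(0, block, block_size) == dif->guard ? 0 : -1;
    }

    int main(void)
    {
        enum { BLK = 512, NBLKS = 8 };       /* 4096-byte buffer, 8 bytes of DIF per block */
        static uint8_t data[BLK * NBLKS];
        struct t10_dif dif[NBLKS];

        for (size_t i = 0; i < sizeof(data); i++)
            data[i] = (uint8_t)i;

        for (int b = 0; b < NBLKS; b++) {    /* generate: compute and store the tags */
            dif[b].guard   = crc16_t10dif(0, data + (size_t)b * BLK, BLK);
            dif[b].app_tag = 0;
            dif[b].ref_tag = (uint32_t)b;
        }

        int bad = 0;
        for (int b = 0; b < NBLKS; b++)      /* verify: recompute the guard and compare */
            bad += dif_verify_block(data + (size_t)b * BLK, BLK, &dif[b]) != 0;
        printf("%d of %d blocks failed the guard check\n", bad, NBLKS);
        return 0;
    }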
00:30:23.206 [2024-07-22 16:07:26.996901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65259 ] 00:30:23.206 [2024-07-22 16:07:27.174790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.206 [2024-07-22 16:07:27.446262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val= 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val= 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val=0x1 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val= 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val= 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val=dif_verify 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val='512 bytes' 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val='8 bytes' 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val= 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val=software 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@23 -- # accel_module=software 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 
-- # val=32 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val=32 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val=1 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val=No 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val= 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:23.464 16:07:27 -- accel/accel.sh@21 -- # val= 00:30:23.464 16:07:27 -- accel/accel.sh@22 -- # case "$var" in 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # IFS=: 00:30:23.464 16:07:27 -- accel/accel.sh@20 -- # read -r var val 00:30:25.994 16:07:29 -- accel/accel.sh@21 -- # val= 00:30:25.994 16:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # IFS=: 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # read -r var val 00:30:25.994 16:07:29 -- accel/accel.sh@21 -- # val= 00:30:25.994 16:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # IFS=: 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # read -r var val 00:30:25.994 16:07:29 -- accel/accel.sh@21 -- # val= 00:30:25.994 16:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # IFS=: 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # read -r var val 00:30:25.994 16:07:29 -- accel/accel.sh@21 -- # val= 00:30:25.994 16:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # IFS=: 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # read -r var val 00:30:25.994 16:07:29 -- accel/accel.sh@21 -- # val= 00:30:25.994 16:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # IFS=: 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # read -r var val 00:30:25.994 16:07:29 -- accel/accel.sh@21 -- # val= 00:30:25.994 16:07:29 -- accel/accel.sh@22 -- # case "$var" in 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # IFS=: 00:30:25.994 16:07:29 -- accel/accel.sh@20 -- # read -r var val 00:30:25.994 16:07:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:25.994 16:07:29 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:30:25.994 ************************************ 00:30:25.994 END TEST accel_dif_verify 00:30:25.994 ************************************ 00:30:25.994 16:07:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:25.995 00:30:25.995 real 0m5.539s 00:30:25.995 user 0m4.850s 00:30:25.995 sys 0m0.504s 00:30:25.995 16:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:25.995 
16:07:29 -- common/autotest_common.sh@10 -- # set +x 00:30:25.995 16:07:29 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:30:25.995 16:07:29 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:30:25.995 16:07:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:25.995 16:07:29 -- common/autotest_common.sh@10 -- # set +x 00:30:25.995 ************************************ 00:30:25.995 START TEST accel_dif_generate 00:30:25.995 ************************************ 00:30:25.995 16:07:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:30:25.995 16:07:29 -- accel/accel.sh@16 -- # local accel_opc 00:30:25.995 16:07:29 -- accel/accel.sh@17 -- # local accel_module 00:30:25.995 16:07:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:30:25.995 16:07:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:30:25.995 16:07:29 -- accel/accel.sh@12 -- # build_accel_config 00:30:25.995 16:07:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:25.995 16:07:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:25.995 16:07:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:25.995 16:07:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:25.995 16:07:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:25.995 16:07:29 -- accel/accel.sh@41 -- # local IFS=, 00:30:25.995 16:07:29 -- accel/accel.sh@42 -- # jq -r . 00:30:25.995 [2024-07-22 16:07:29.808740] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:25.995 [2024-07-22 16:07:29.808936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65300 ] 00:30:25.995 [2024-07-22 16:07:29.986307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.252 [2024-07-22 16:07:30.303032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.782 16:07:32 -- accel/accel.sh@18 -- # out=' 00:30:28.782 SPDK Configuration: 00:30:28.782 Core mask: 0x1 00:30:28.782 00:30:28.782 Accel Perf Configuration: 00:30:28.782 Workload Type: dif_generate 00:30:28.782 Vector size: 4096 bytes 00:30:28.782 Transfer size: 4096 bytes 00:30:28.782 Block size: 512 bytes 00:30:28.782 Metadata size: 8 bytes 00:30:28.782 Vector count 1 00:30:28.782 Module: software 00:30:28.782 Queue depth: 32 00:30:28.782 Allocate depth: 32 00:30:28.782 # threads/core: 1 00:30:28.782 Run time: 1 seconds 00:30:28.782 Verify: No 00:30:28.782 00:30:28.782 Running for 1 seconds... 
00:30:28.782 00:30:28.782 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:28.782 ------------------------------------------------------------------------------------ 00:30:28.782 0,0 106208/s 421 MiB/s 0 0 00:30:28.782 ==================================================================================== 00:30:28.782 Total 106208/s 414 MiB/s 0 0' 00:30:28.782 16:07:32 -- accel/accel.sh@20 -- # IFS=: 00:30:28.782 16:07:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:30:28.782 16:07:32 -- accel/accel.sh@20 -- # read -r var val 00:30:28.782 16:07:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:30:28.782 16:07:32 -- accel/accel.sh@12 -- # build_accel_config 00:30:28.782 16:07:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:28.782 16:07:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:28.782 16:07:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:28.782 16:07:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:28.782 16:07:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:28.782 16:07:32 -- accel/accel.sh@41 -- # local IFS=, 00:30:28.782 16:07:32 -- accel/accel.sh@42 -- # jq -r . 00:30:28.782 [2024-07-22 16:07:32.619418] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:28.782 [2024-07-22 16:07:32.619570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65337 ] 00:30:28.782 [2024-07-22 16:07:32.784699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.040 [2024-07-22 16:07:33.068785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val= 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val= 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val=0x1 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val= 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val= 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val=dif_generate 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 
00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val='512 bytes' 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val='8 bytes' 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val= 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val=software 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@23 -- # accel_module=software 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.040 16:07:33 -- accel/accel.sh@21 -- # val=32 00:30:29.040 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.040 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.298 16:07:33 -- accel/accel.sh@21 -- # val=32 00:30:29.298 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.298 16:07:33 -- accel/accel.sh@21 -- # val=1 00:30:29.298 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.298 16:07:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:29.298 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.298 16:07:33 -- accel/accel.sh@21 -- # val=No 00:30:29.298 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.298 16:07:33 -- accel/accel.sh@21 -- # val= 00:30:29.298 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:29.298 16:07:33 -- accel/accel.sh@21 -- # val= 00:30:29.298 16:07:33 -- accel/accel.sh@22 -- # case "$var" in 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # IFS=: 00:30:29.298 16:07:33 -- accel/accel.sh@20 -- # read -r var val 00:30:31.195 16:07:35 -- accel/accel.sh@21 -- # val= 00:30:31.195 16:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:30:31.195 16:07:35 -- accel/accel.sh@20 -- # IFS=: 00:30:31.195 16:07:35 -- accel/accel.sh@20 -- # read -r var val 00:30:31.195 16:07:35 -- accel/accel.sh@21 -- # val= 00:30:31.196 16:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # IFS=: 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # read -r var val 00:30:31.196 16:07:35 -- accel/accel.sh@21 -- # val= 00:30:31.196 16:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:30:31.196 16:07:35 -- 
accel/accel.sh@20 -- # IFS=: 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # read -r var val 00:30:31.196 16:07:35 -- accel/accel.sh@21 -- # val= 00:30:31.196 16:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # IFS=: 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # read -r var val 00:30:31.196 16:07:35 -- accel/accel.sh@21 -- # val= 00:30:31.196 16:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # IFS=: 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # read -r var val 00:30:31.196 16:07:35 -- accel/accel.sh@21 -- # val= 00:30:31.196 16:07:35 -- accel/accel.sh@22 -- # case "$var" in 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # IFS=: 00:30:31.196 16:07:35 -- accel/accel.sh@20 -- # read -r var val 00:30:31.196 16:07:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:31.196 16:07:35 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:30:31.196 16:07:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:31.196 00:30:31.196 real 0m5.592s 00:30:31.196 user 0m4.904s 00:30:31.196 sys 0m0.502s 00:30:31.196 16:07:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.196 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:30:31.196 ************************************ 00:30:31.196 END TEST accel_dif_generate 00:30:31.196 ************************************ 00:30:31.196 16:07:35 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:30:31.196 16:07:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:30:31.196 16:07:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:31.196 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:30:31.196 ************************************ 00:30:31.196 START TEST accel_dif_generate_copy 00:30:31.196 ************************************ 00:30:31.196 16:07:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:30:31.196 16:07:35 -- accel/accel.sh@16 -- # local accel_opc 00:30:31.196 16:07:35 -- accel/accel.sh@17 -- # local accel_module 00:30:31.196 16:07:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:30:31.196 16:07:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:30:31.196 16:07:35 -- accel/accel.sh@12 -- # build_accel_config 00:30:31.196 16:07:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:31.196 16:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:31.196 16:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:31.196 16:07:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:31.196 16:07:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:31.196 16:07:35 -- accel/accel.sh@41 -- # local IFS=, 00:30:31.196 16:07:35 -- accel/accel.sh@42 -- # jq -r . 00:30:31.196 [2024-07-22 16:07:35.451315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:31.196 [2024-07-22 16:07:35.451511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65384 ] 00:30:31.493 [2024-07-22 16:07:35.630635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.754 [2024-07-22 16:07:35.909841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.280 16:07:38 -- accel/accel.sh@18 -- # out=' 00:30:34.280 SPDK Configuration: 00:30:34.280 Core mask: 0x1 00:30:34.280 00:30:34.280 Accel Perf Configuration: 00:30:34.280 Workload Type: dif_generate_copy 00:30:34.280 Vector size: 4096 bytes 00:30:34.280 Transfer size: 4096 bytes 00:30:34.280 Vector count 1 00:30:34.280 Module: software 00:30:34.280 Queue depth: 32 00:30:34.281 Allocate depth: 32 00:30:34.281 # threads/core: 1 00:30:34.281 Run time: 1 seconds 00:30:34.281 Verify: No 00:30:34.281 00:30:34.281 Running for 1 seconds... 00:30:34.281 00:30:34.281 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:34.281 ------------------------------------------------------------------------------------ 00:30:34.281 0,0 79008/s 313 MiB/s 0 0 00:30:34.281 ==================================================================================== 00:30:34.281 Total 79008/s 308 MiB/s 0 0' 00:30:34.281 16:07:38 -- accel/accel.sh@20 -- # IFS=: 00:30:34.281 16:07:38 -- accel/accel.sh@20 -- # read -r var val 00:30:34.281 16:07:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:30:34.281 16:07:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:30:34.281 16:07:38 -- accel/accel.sh@12 -- # build_accel_config 00:30:34.281 16:07:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:34.281 16:07:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:34.281 16:07:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:34.281 16:07:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:34.281 16:07:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:34.281 16:07:38 -- accel/accel.sh@41 -- # local IFS=, 00:30:34.281 16:07:38 -- accel/accel.sh@42 -- # jq -r . 00:30:34.281 [2024-07-22 16:07:38.402451] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:34.281 [2024-07-22 16:07:38.402667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65421 ] 00:30:34.538 [2024-07-22 16:07:38.575743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.795 [2024-07-22 16:07:38.899373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val= 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val= 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val=0x1 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val= 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val= 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val= 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val=software 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@23 -- # accel_module=software 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val=32 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val=32 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 
-- # val=1 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val=No 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val= 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:35.053 16:07:39 -- accel/accel.sh@21 -- # val= 00:30:35.053 16:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # IFS=: 00:30:35.053 16:07:39 -- accel/accel.sh@20 -- # read -r var val 00:30:36.950 16:07:41 -- accel/accel.sh@21 -- # val= 00:30:36.950 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:30:36.950 16:07:41 -- accel/accel.sh@21 -- # val= 00:30:36.950 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:30:36.950 16:07:41 -- accel/accel.sh@21 -- # val= 00:30:36.950 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:30:36.950 16:07:41 -- accel/accel.sh@21 -- # val= 00:30:36.950 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:30:36.950 16:07:41 -- accel/accel.sh@21 -- # val= 00:30:36.950 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:30:36.950 16:07:41 -- accel/accel.sh@21 -- # val= 00:30:36.950 16:07:41 -- accel/accel.sh@22 -- # case "$var" in 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # IFS=: 00:30:36.950 16:07:41 -- accel/accel.sh@20 -- # read -r var val 00:30:36.950 16:07:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:36.950 16:07:41 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:30:36.950 16:07:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:36.950 00:30:36.950 real 0m5.814s 00:30:36.950 user 0m5.077s 00:30:36.950 sys 0m0.547s 00:30:36.950 16:07:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.950 16:07:41 -- common/autotest_common.sh@10 -- # set +x 00:30:36.950 ************************************ 00:30:36.950 END TEST accel_dif_generate_copy 00:30:36.950 ************************************ 00:30:37.209 16:07:41 -- accel/accel.sh@107 -- # [[ y == y ]] 00:30:37.209 16:07:41 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:37.209 16:07:41 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:30:37.209 16:07:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.209 16:07:41 -- 
common/autotest_common.sh@10 -- # set +x 00:30:37.209 ************************************ 00:30:37.209 START TEST accel_comp 00:30:37.209 ************************************ 00:30:37.209 16:07:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:37.209 16:07:41 -- accel/accel.sh@16 -- # local accel_opc 00:30:37.209 16:07:41 -- accel/accel.sh@17 -- # local accel_module 00:30:37.209 16:07:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:37.209 16:07:41 -- accel/accel.sh@12 -- # build_accel_config 00:30:37.209 16:07:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:37.209 16:07:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:37.209 16:07:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:37.209 16:07:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:37.209 16:07:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:37.209 16:07:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:37.209 16:07:41 -- accel/accel.sh@41 -- # local IFS=, 00:30:37.209 16:07:41 -- accel/accel.sh@42 -- # jq -r . 00:30:37.209 [2024-07-22 16:07:41.311184] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:37.209 [2024-07-22 16:07:41.311462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65468 ] 00:30:37.466 [2024-07-22 16:07:41.488059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.724 [2024-07-22 16:07:41.806090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.253 16:07:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:30:40.253 00:30:40.253 SPDK Configuration: 00:30:40.253 Core mask: 0x1 00:30:40.253 00:30:40.253 Accel Perf Configuration: 00:30:40.253 Workload Type: compress 00:30:40.253 Transfer size: 4096 bytes 00:30:40.253 Vector count 1 00:30:40.253 Module: software 00:30:40.253 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:40.253 Queue depth: 32 00:30:40.253 Allocate depth: 32 00:30:40.253 # threads/core: 1 00:30:40.253 Run time: 1 seconds 00:30:40.253 Verify: No 00:30:40.253 00:30:40.253 Running for 1 seconds... 
00:30:40.253 00:30:40.253 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:40.253 ------------------------------------------------------------------------------------ 00:30:40.253 0,0 46016/s 191 MiB/s 0 0 00:30:40.253 ==================================================================================== 00:30:40.253 Total 46016/s 179 MiB/s 0 0' 00:30:40.253 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.253 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.253 16:07:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:40.253 16:07:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:40.253 16:07:44 -- accel/accel.sh@12 -- # build_accel_config 00:30:40.253 16:07:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:40.253 16:07:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:40.253 16:07:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:40.253 16:07:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:40.253 16:07:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:40.253 16:07:44 -- accel/accel.sh@41 -- # local IFS=, 00:30:40.253 16:07:44 -- accel/accel.sh@42 -- # jq -r . 00:30:40.253 [2024-07-22 16:07:44.126117] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:40.253 [2024-07-22 16:07:44.126310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65499 ] 00:30:40.253 [2024-07-22 16:07:44.300852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.512 [2024-07-22 16:07:44.558706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val=0x1 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val=compress 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@24 -- # accel_opc=compress 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 
00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.770 16:07:44 -- accel/accel.sh@21 -- # val=software 00:30:40.770 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.770 16:07:44 -- accel/accel.sh@23 -- # accel_module=software 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.770 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val=32 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val=32 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val=1 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val=No 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:40.771 16:07:44 -- accel/accel.sh@21 -- # val= 00:30:40.771 16:07:44 -- accel/accel.sh@22 -- # case "$var" in 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # IFS=: 00:30:40.771 16:07:44 -- accel/accel.sh@20 -- # read -r var val 00:30:42.697 16:07:46 -- accel/accel.sh@21 -- # val= 00:30:42.697 16:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # IFS=: 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # read -r var val 00:30:42.697 16:07:46 -- accel/accel.sh@21 -- # val= 00:30:42.697 16:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # IFS=: 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # read -r var val 00:30:42.697 16:07:46 -- accel/accel.sh@21 -- # val= 00:30:42.697 16:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # IFS=: 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # read -r var val 00:30:42.697 16:07:46 -- accel/accel.sh@21 -- # val= 
00:30:42.697 16:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # IFS=: 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # read -r var val 00:30:42.697 16:07:46 -- accel/accel.sh@21 -- # val= 00:30:42.697 16:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # IFS=: 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # read -r var val 00:30:42.697 16:07:46 -- accel/accel.sh@21 -- # val= 00:30:42.697 16:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # IFS=: 00:30:42.697 16:07:46 -- accel/accel.sh@20 -- # read -r var val 00:30:42.697 16:07:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:42.697 16:07:46 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:30:42.697 16:07:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:42.697 00:30:42.697 real 0m5.523s 00:30:42.697 user 0m4.828s 00:30:42.697 sys 0m0.512s 00:30:42.697 16:07:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.697 16:07:46 -- common/autotest_common.sh@10 -- # set +x 00:30:42.697 ************************************ 00:30:42.697 END TEST accel_comp 00:30:42.697 ************************************ 00:30:42.697 16:07:46 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:30:42.697 16:07:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:30:42.697 16:07:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:42.697 16:07:46 -- common/autotest_common.sh@10 -- # set +x 00:30:42.697 ************************************ 00:30:42.697 START TEST accel_decomp 00:30:42.697 ************************************ 00:30:42.697 16:07:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:30:42.697 16:07:46 -- accel/accel.sh@16 -- # local accel_opc 00:30:42.697 16:07:46 -- accel/accel.sh@17 -- # local accel_module 00:30:42.697 16:07:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:30:42.697 16:07:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:30:42.697 16:07:46 -- accel/accel.sh@12 -- # build_accel_config 00:30:42.697 16:07:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:42.697 16:07:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:42.697 16:07:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:42.697 16:07:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:42.697 16:07:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:42.697 16:07:46 -- accel/accel.sh@41 -- # local IFS=, 00:30:42.697 16:07:46 -- accel/accel.sh@42 -- # jq -r . 00:30:42.697 [2024-07-22 16:07:46.888236] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:42.697 [2024-07-22 16:07:46.888440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65546 ] 00:30:42.955 [2024-07-22 16:07:47.068735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.214 [2024-07-22 16:07:47.335971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.743 16:07:49 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:30:45.743 00:30:45.743 SPDK Configuration: 00:30:45.743 Core mask: 0x1 00:30:45.743 00:30:45.743 Accel Perf Configuration: 00:30:45.743 Workload Type: decompress 00:30:45.743 Transfer size: 4096 bytes 00:30:45.743 Vector count 1 00:30:45.743 Module: software 00:30:45.743 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:45.743 Queue depth: 32 00:30:45.743 Allocate depth: 32 00:30:45.743 # threads/core: 1 00:30:45.743 Run time: 1 seconds 00:30:45.743 Verify: Yes 00:30:45.743 00:30:45.743 Running for 1 seconds... 00:30:45.743 00:30:45.743 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:45.743 ------------------------------------------------------------------------------------ 00:30:45.743 0,0 57824/s 106 MiB/s 0 0 00:30:45.743 ==================================================================================== 00:30:45.743 Total 57824/s 225 MiB/s 0 0' 00:30:45.743 16:07:49 -- accel/accel.sh@20 -- # IFS=: 00:30:45.743 16:07:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:30:45.743 16:07:49 -- accel/accel.sh@20 -- # read -r var val 00:30:45.743 16:07:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:30:45.743 16:07:49 -- accel/accel.sh@12 -- # build_accel_config 00:30:45.743 16:07:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:45.743 16:07:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:45.743 16:07:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:45.743 16:07:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:45.743 16:07:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:45.743 16:07:49 -- accel/accel.sh@41 -- # local IFS=, 00:30:45.743 16:07:49 -- accel/accel.sh@42 -- # jq -r . 00:30:45.743 [2024-07-22 16:07:49.612033] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:45.743 [2024-07-22 16:07:49.612255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65583 ] 00:30:45.743 [2024-07-22 16:07:49.790076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.001 [2024-07-22 16:07:50.109429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.259 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.259 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.259 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.259 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.259 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.259 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.259 16:07:50 -- accel/accel.sh@21 -- # val=0x1 00:30:46.259 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.259 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.259 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.259 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.259 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.259 16:07:50 -- accel/accel.sh@21 -- # val=decompress 00:30:46.259 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.259 16:07:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.259 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val=software 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@23 -- # accel_module=software 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val=32 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- 
accel/accel.sh@21 -- # val=32 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val=1 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val=Yes 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:46.260 16:07:50 -- accel/accel.sh@21 -- # val= 00:30:46.260 16:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # IFS=: 00:30:46.260 16:07:50 -- accel/accel.sh@20 -- # read -r var val 00:30:48.162 16:07:52 -- accel/accel.sh@21 -- # val= 00:30:48.162 16:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # IFS=: 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # read -r var val 00:30:48.162 16:07:52 -- accel/accel.sh@21 -- # val= 00:30:48.162 16:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # IFS=: 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # read -r var val 00:30:48.162 16:07:52 -- accel/accel.sh@21 -- # val= 00:30:48.162 16:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # IFS=: 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # read -r var val 00:30:48.162 16:07:52 -- accel/accel.sh@21 -- # val= 00:30:48.162 16:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # IFS=: 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # read -r var val 00:30:48.162 16:07:52 -- accel/accel.sh@21 -- # val= 00:30:48.162 16:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # IFS=: 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # read -r var val 00:30:48.162 16:07:52 -- accel/accel.sh@21 -- # val= 00:30:48.162 16:07:52 -- accel/accel.sh@22 -- # case "$var" in 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # IFS=: 00:30:48.162 16:07:52 -- accel/accel.sh@20 -- # read -r var val 00:30:48.162 16:07:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:48.162 16:07:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:30:48.162 16:07:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:48.162 00:30:48.162 real 0m5.562s 00:30:48.162 user 0m4.847s 00:30:48.162 sys 0m0.529s 00:30:48.162 16:07:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:48.162 16:07:52 -- common/autotest_common.sh@10 -- # set +x 00:30:48.162 ************************************ 00:30:48.162 END TEST accel_decomp 00:30:48.162 ************************************ 00:30:48.421 16:07:52 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:30:48.422 16:07:52 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:30:48.422 16:07:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:48.422 16:07:52 -- common/autotest_common.sh@10 -- # set +x 00:30:48.422 ************************************ 00:30:48.422 START TEST accel_decmop_full 00:30:48.422 ************************************ 00:30:48.422 16:07:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:30:48.422 16:07:52 -- accel/accel.sh@16 -- # local accel_opc 00:30:48.422 16:07:52 -- accel/accel.sh@17 -- # local accel_module 00:30:48.422 16:07:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:30:48.422 16:07:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:30:48.422 16:07:52 -- accel/accel.sh@12 -- # build_accel_config 00:30:48.422 16:07:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:48.422 16:07:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:48.422 16:07:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:48.422 16:07:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:48.422 16:07:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:48.422 16:07:52 -- accel/accel.sh@41 -- # local IFS=, 00:30:48.422 16:07:52 -- accel/accel.sh@42 -- # jq -r . 00:30:48.422 [2024-07-22 16:07:52.503179] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:48.422 [2024-07-22 16:07:52.503350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65630 ] 00:30:48.422 [2024-07-22 16:07:52.677109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.989 [2024-07-22 16:07:52.954217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.520 16:07:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:30:51.520 00:30:51.520 SPDK Configuration: 00:30:51.520 Core mask: 0x1 00:30:51.520 00:30:51.520 Accel Perf Configuration: 00:30:51.520 Workload Type: decompress 00:30:51.520 Transfer size: 111250 bytes 00:30:51.520 Vector count 1 00:30:51.520 Module: software 00:30:51.520 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:51.520 Queue depth: 32 00:30:51.520 Allocate depth: 32 00:30:51.520 # threads/core: 1 00:30:51.520 Run time: 1 seconds 00:30:51.520 Verify: Yes 00:30:51.520 00:30:51.520 Running for 1 seconds... 
00:30:51.520 00:30:51.520 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:51.520 ------------------------------------------------------------------------------------ 00:30:51.520 0,0 4192/s 173 MiB/s 0 0 00:30:51.520 ==================================================================================== 00:30:51.520 Total 4192/s 444 MiB/s 0 0' 00:30:51.520 16:07:55 -- accel/accel.sh@20 -- # IFS=: 00:30:51.520 16:07:55 -- accel/accel.sh@20 -- # read -r var val 00:30:51.520 16:07:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:30:51.520 16:07:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:30:51.520 16:07:55 -- accel/accel.sh@12 -- # build_accel_config 00:30:51.520 16:07:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:51.520 16:07:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:51.520 16:07:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:51.520 16:07:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:51.520 16:07:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:51.520 16:07:55 -- accel/accel.sh@41 -- # local IFS=, 00:30:51.520 16:07:55 -- accel/accel.sh@42 -- # jq -r . 00:30:51.520 [2024-07-22 16:07:55.335643] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:51.520 [2024-07-22 16:07:55.335832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65661 ] 00:30:51.520 [2024-07-22 16:07:55.517091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.778 [2024-07-22 16:07:55.828876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=0x1 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=decompress 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:30:52.037 16:07:56 -- accel/accel.sh@20 
-- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val='111250 bytes' 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=software 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@23 -- # accel_module=software 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=32 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=32 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=1 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val=Yes 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:52.037 16:07:56 -- accel/accel.sh@21 -- # val= 00:30:52.037 16:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # IFS=: 00:30:52.037 16:07:56 -- accel/accel.sh@20 -- # read -r var val 00:30:53.939 16:07:58 -- accel/accel.sh@21 -- # val= 00:30:53.939 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:30:53.939 16:07:58 -- accel/accel.sh@21 -- # val= 00:30:53.939 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:30:53.939 16:07:58 -- accel/accel.sh@21 -- # val= 00:30:53.939 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:30:53.939 16:07:58 -- accel/accel.sh@21 -- # 
val= 00:30:53.939 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:30:53.939 16:07:58 -- accel/accel.sh@21 -- # val= 00:30:53.939 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:30:53.939 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:30:53.939 16:07:58 -- accel/accel.sh@21 -- # val= 00:30:53.939 16:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:30:53.940 16:07:58 -- accel/accel.sh@20 -- # IFS=: 00:30:53.940 16:07:58 -- accel/accel.sh@20 -- # read -r var val 00:30:53.940 16:07:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:30:53.940 16:07:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:30:53.940 16:07:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:53.940 00:30:53.940 real 0m5.687s 00:30:53.940 user 0m4.940s 00:30:53.940 sys 0m0.562s 00:30:53.940 16:07:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.940 16:07:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.940 ************************************ 00:30:53.940 END TEST accel_decmop_full 00:30:53.940 ************************************ 00:30:53.940 16:07:58 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:30:53.940 16:07:58 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:30:53.940 16:07:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:53.940 16:07:58 -- common/autotest_common.sh@10 -- # set +x 00:30:53.940 ************************************ 00:30:53.940 START TEST accel_decomp_mcore 00:30:53.940 ************************************ 00:30:53.940 16:07:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:30:53.940 16:07:58 -- accel/accel.sh@16 -- # local accel_opc 00:30:53.940 16:07:58 -- accel/accel.sh@17 -- # local accel_module 00:30:53.940 16:07:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:30:53.940 16:07:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:30:53.940 16:07:58 -- accel/accel.sh@12 -- # build_accel_config 00:30:53.940 16:07:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:53.940 16:07:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:53.940 16:07:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:53.940 16:07:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:53.940 16:07:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:53.940 16:07:58 -- accel/accel.sh@41 -- # local IFS=, 00:30:53.940 16:07:58 -- accel/accel.sh@42 -- # jq -r . 00:30:54.209 [2024-07-22 16:07:58.247261] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:54.209 [2024-07-22 16:07:58.247466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65709 ] 00:30:54.209 [2024-07-22 16:07:58.430032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:54.477 [2024-07-22 16:07:58.706057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.477 [2024-07-22 16:07:58.706154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.477 [2024-07-22 16:07:58.706382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.477 [2024-07-22 16:07:58.706437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.008 16:08:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:30:57.008 00:30:57.008 SPDK Configuration: 00:30:57.008 Core mask: 0xf 00:30:57.008 00:30:57.008 Accel Perf Configuration: 00:30:57.008 Workload Type: decompress 00:30:57.008 Transfer size: 4096 bytes 00:30:57.008 Vector count 1 00:30:57.008 Module: software 00:30:57.008 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:57.008 Queue depth: 32 00:30:57.008 Allocate depth: 32 00:30:57.008 # threads/core: 1 00:30:57.008 Run time: 1 seconds 00:30:57.008 Verify: Yes 00:30:57.008 00:30:57.008 Running for 1 seconds... 00:30:57.008 00:30:57.008 Core,Thread Transfers Bandwidth Failed Miscompares 00:30:57.008 ------------------------------------------------------------------------------------ 00:30:57.008 0,0 45984/s 84 MiB/s 0 0 00:30:57.008 3,0 45088/s 83 MiB/s 0 0 00:30:57.008 2,0 46464/s 85 MiB/s 0 0 00:30:57.008 1,0 46336/s 85 MiB/s 0 0 00:30:57.008 ==================================================================================== 00:30:57.008 Total 183872/s 718 MiB/s 0 0' 00:30:57.008 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.008 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.008 16:08:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:30:57.008 16:08:01 -- accel/accel.sh@12 -- # build_accel_config 00:30:57.008 16:08:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:30:57.008 16:08:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:30:57.008 16:08:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:57.008 16:08:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:57.008 16:08:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:30:57.008 16:08:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:30:57.008 16:08:01 -- accel/accel.sh@41 -- # local IFS=, 00:30:57.008 16:08:01 -- accel/accel.sh@42 -- # jq -r . 00:30:57.008 [2024-07-22 16:08:01.053303] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
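For reference, the multi-core decompress run summarized above can be repeated outside the test harness; a minimal sketch, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk and the default software accel module (no JSON config piped in over -c as the harness does):

    cd /home/vagrant/spdk_repo/spdk
    # 1-second decompress of the pre-generated test input on cores 0-3 (mask 0xf), verifying output (-y)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf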
00:30:57.008 [2024-07-22 16:08:01.053755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65748 ] 00:30:57.008 [2024-07-22 16:08:01.231279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:57.266 [2024-07-22 16:08:01.529098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:57.266 [2024-07-22 16:08:01.529476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:57.266 [2024-07-22 16:08:01.529429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.266 [2024-07-22 16:08:01.529230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val=0xf 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val=decompress 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val=software 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@23 -- # accel_module=software 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 
00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val=32 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val=32 00:30:57.524 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.524 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.524 16:08:01 -- accel/accel.sh@21 -- # val=1 00:30:57.525 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.525 16:08:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:30:57.525 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.525 16:08:01 -- accel/accel.sh@21 -- # val=Yes 00:30:57.525 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.525 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.525 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:30:57.525 16:08:01 -- accel/accel.sh@21 -- # val= 00:30:57.525 16:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # IFS=: 00:30:57.525 16:08:01 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- 
accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@21 -- # val= 00:31:00.057 16:08:03 -- accel/accel.sh@22 -- # case "$var" in 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # IFS=: 00:31:00.057 16:08:03 -- accel/accel.sh@20 -- # read -r var val 00:31:00.057 16:08:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:31:00.057 16:08:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:31:00.057 16:08:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:00.057 00:31:00.057 real 0m5.638s 00:31:00.057 user 0m7.824s 00:31:00.057 sys 0m0.306s 00:31:00.057 ************************************ 00:31:00.057 16:08:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.057 16:08:03 -- common/autotest_common.sh@10 -- # set +x 00:31:00.057 END TEST accel_decomp_mcore 00:31:00.057 ************************************ 00:31:00.057 16:08:03 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:31:00.057 16:08:03 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:00.057 16:08:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:00.057 16:08:03 -- common/autotest_common.sh@10 -- # set +x 00:31:00.057 ************************************ 00:31:00.057 START TEST accel_decomp_full_mcore 00:31:00.057 ************************************ 00:31:00.057 16:08:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:31:00.057 16:08:03 -- accel/accel.sh@16 -- # local accel_opc 00:31:00.057 16:08:03 -- accel/accel.sh@17 -- # local accel_module 00:31:00.057 16:08:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:31:00.057 16:08:03 -- accel/accel.sh@12 -- # build_accel_config 00:31:00.057 16:08:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:31:00.057 16:08:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:31:00.057 16:08:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:31:00.057 16:08:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:31:00.057 16:08:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:31:00.057 16:08:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:31:00.057 16:08:03 -- accel/accel.sh@41 -- # local IFS=, 00:31:00.057 16:08:03 -- accel/accel.sh@42 -- # jq -r . 00:31:00.057 [2024-07-22 16:08:03.941041] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:00.057 [2024-07-22 16:08:03.941285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65802 ] 00:31:00.057 [2024-07-22 16:08:04.125110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.316 [2024-07-22 16:08:04.434868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.316 [2024-07-22 16:08:04.435062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.316 [2024-07-22 16:08:04.435182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.316 [2024-07-22 16:08:04.435291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.847 16:08:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:31:02.847 00:31:02.847 SPDK Configuration: 00:31:02.847 Core mask: 0xf 00:31:02.847 00:31:02.847 Accel Perf Configuration: 00:31:02.847 Workload Type: decompress 00:31:02.847 Transfer size: 111250 bytes 00:31:02.847 Vector count 1 00:31:02.847 Module: software 00:31:02.847 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:31:02.847 Queue depth: 32 00:31:02.847 Allocate depth: 32 00:31:02.847 # threads/core: 1 00:31:02.847 Run time: 1 seconds 00:31:02.847 Verify: Yes 00:31:02.847 00:31:02.847 Running for 1 seconds... 00:31:02.847 00:31:02.847 Core,Thread Transfers Bandwidth Failed Miscompares 00:31:02.847 ------------------------------------------------------------------------------------ 00:31:02.847 0,0 4320/s 178 MiB/s 0 0 00:31:02.847 3,0 4320/s 178 MiB/s 0 0 00:31:02.847 2,0 4320/s 178 MiB/s 0 0 00:31:02.847 1,0 4384/s 181 MiB/s 0 0 00:31:02.847 ==================================================================================== 00:31:02.847 Total 17344/s 1840 MiB/s 0 0' 00:31:02.847 16:08:06 -- accel/accel.sh@20 -- # IFS=: 00:31:02.847 16:08:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:31:02.847 16:08:06 -- accel/accel.sh@20 -- # read -r var val 00:31:02.847 16:08:06 -- accel/accel.sh@12 -- # build_accel_config 00:31:02.847 16:08:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:31:02.847 16:08:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:31:02.847 16:08:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:31:02.847 16:08:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:31:02.847 16:08:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:31:02.847 16:08:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:31:02.847 16:08:06 -- accel/accel.sh@41 -- # local IFS=, 00:31:02.847 16:08:06 -- accel/accel.sh@42 -- # jq -r . 00:31:02.847 [2024-07-22 16:08:06.810237] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
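The full-buffer variant above differs from the previous run only in the -o 0 argument, which appears to let the transfer size follow the compressed test file's chunk size (111250 bytes here) rather than the 4096-byte default; the totals line is consistent with that, since 17344 transfers/s x 111250 bytes is roughly 1840 MiB/s. A hedged sketch of the equivalent manual run:

    # same 4-core decompress, but full 111250-byte buffers (-o 0)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf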
00:31:02.847 [2024-07-22 16:08:06.810503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65838 ] 00:31:02.847 [2024-07-22 16:08:06.992160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:03.106 [2024-07-22 16:08:07.284339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.106 [2024-07-22 16:08:07.284442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:03.106 [2024-07-22 16:08:07.284524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.106 [2024-07-22 16:08:07.284540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=0xf 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=decompress 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=software 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@23 -- # accel_module=software 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 
00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=32 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=32 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=1 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val=Yes 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:03.365 16:08:07 -- accel/accel.sh@21 -- # val= 00:31:03.365 16:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # IFS=: 00:31:03.365 16:08:07 -- accel/accel.sh@20 -- # read -r var val 00:31:05.889 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.889 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.889 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.889 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.889 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.889 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.889 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.889 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.889 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.889 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.889 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.889 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.889 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.889 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.889 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.890 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.890 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.890 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.890 16:08:09 -- 
accel/accel.sh@20 -- # read -r var val 00:31:05.890 16:08:09 -- accel/accel.sh@21 -- # val= 00:31:05.890 16:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:31:05.890 16:08:09 -- accel/accel.sh@20 -- # IFS=: 00:31:05.890 16:08:09 -- accel/accel.sh@20 -- # read -r var val 00:31:05.890 16:08:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:31:05.890 16:08:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:31:05.890 16:08:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:05.890 00:31:05.890 real 0m5.730s 00:31:05.890 user 0m7.977s 00:31:05.890 sys 0m0.299s 00:31:05.890 16:08:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.890 16:08:09 -- common/autotest_common.sh@10 -- # set +x 00:31:05.890 ************************************ 00:31:05.890 END TEST accel_decomp_full_mcore 00:31:05.890 ************************************ 00:31:05.890 16:08:09 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:31:05.890 16:08:09 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:31:05.890 16:08:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:05.890 16:08:09 -- common/autotest_common.sh@10 -- # set +x 00:31:05.890 ************************************ 00:31:05.890 START TEST accel_decomp_mthread 00:31:05.890 ************************************ 00:31:05.890 16:08:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:31:05.890 16:08:09 -- accel/accel.sh@16 -- # local accel_opc 00:31:05.890 16:08:09 -- accel/accel.sh@17 -- # local accel_module 00:31:05.890 16:08:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:31:05.890 16:08:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:31:05.890 16:08:09 -- accel/accel.sh@12 -- # build_accel_config 00:31:05.890 16:08:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:31:05.890 16:08:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:31:05.890 16:08:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:31:05.890 16:08:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:31:05.890 16:08:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:31:05.890 16:08:09 -- accel/accel.sh@41 -- # local IFS=, 00:31:05.890 16:08:09 -- accel/accel.sh@42 -- # jq -r . 00:31:05.890 [2024-07-22 16:08:09.719593] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:05.890 [2024-07-22 16:08:09.719820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65887 ] 00:31:05.890 [2024-07-22 16:08:09.899956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.157 [2024-07-22 16:08:10.174364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.705 16:08:12 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:31:08.705 00:31:08.705 SPDK Configuration: 00:31:08.705 Core mask: 0x1 00:31:08.705 00:31:08.705 Accel Perf Configuration: 00:31:08.705 Workload Type: decompress 00:31:08.705 Transfer size: 4096 bytes 00:31:08.705 Vector count 1 00:31:08.705 Module: software 00:31:08.705 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:31:08.705 Queue depth: 32 00:31:08.705 Allocate depth: 32 00:31:08.705 # threads/core: 2 00:31:08.705 Run time: 1 seconds 00:31:08.705 Verify: Yes 00:31:08.705 00:31:08.705 Running for 1 seconds... 00:31:08.705 00:31:08.705 Core,Thread Transfers Bandwidth Failed Miscompares 00:31:08.705 ------------------------------------------------------------------------------------ 00:31:08.705 0,1 27904/s 51 MiB/s 0 0 00:31:08.705 0,0 27744/s 51 MiB/s 0 0 00:31:08.705 ==================================================================================== 00:31:08.705 Total 55648/s 217 MiB/s 0 0' 00:31:08.705 16:08:12 -- accel/accel.sh@20 -- # IFS=: 00:31:08.705 16:08:12 -- accel/accel.sh@20 -- # read -r var val 00:31:08.705 16:08:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:31:08.705 16:08:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:31:08.705 16:08:12 -- accel/accel.sh@12 -- # build_accel_config 00:31:08.705 16:08:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:31:08.705 16:08:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:31:08.705 16:08:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:31:08.705 16:08:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:31:08.705 16:08:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:31:08.705 16:08:12 -- accel/accel.sh@41 -- # local IFS=, 00:31:08.705 16:08:12 -- accel/accel.sh@42 -- # jq -r . 00:31:08.705 [2024-07-22 16:08:12.592312] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
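Here -T 2 runs two worker threads on the single core in the 0x1 mask, which is why the results table shows the 0,0 and 0,1 rows; a minimal sketch of the same run, with the same path assumptions as above:

    # single core, two threads per core (-T 2), 4096-byte decompress with verification
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2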
00:31:08.705 [2024-07-22 16:08:12.592481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65919 ] 00:31:08.705 [2024-07-22 16:08:12.769293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.963 [2024-07-22 16:08:13.065561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val=0x1 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val=decompress 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val=software 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@23 -- # accel_module=software 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val=32 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- 
accel/accel.sh@21 -- # val=32 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val=2 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val=Yes 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:09.221 16:08:13 -- accel/accel.sh@21 -- # val= 00:31:09.221 16:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # IFS=: 00:31:09.221 16:08:13 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@21 -- # val= 00:31:11.123 16:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # IFS=: 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@21 -- # val= 00:31:11.123 16:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # IFS=: 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@21 -- # val= 00:31:11.123 16:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # IFS=: 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@21 -- # val= 00:31:11.123 16:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # IFS=: 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@21 -- # val= 00:31:11.123 16:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # IFS=: 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@21 -- # val= 00:31:11.123 16:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # IFS=: 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@21 -- # val= 00:31:11.123 16:08:15 -- accel/accel.sh@22 -- # case "$var" in 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # IFS=: 00:31:11.123 16:08:15 -- accel/accel.sh@20 -- # read -r var val 00:31:11.123 16:08:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:31:11.123 16:08:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:31:11.123 16:08:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:11.123 00:31:11.123 real 0m5.678s 00:31:11.123 user 0m4.955s 00:31:11.123 sys 0m0.540s 00:31:11.123 16:08:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.123 16:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.123 ************************************ 00:31:11.123 END 
TEST accel_decomp_mthread 00:31:11.123 ************************************ 00:31:11.382 16:08:15 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:31:11.382 16:08:15 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:31:11.382 16:08:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:11.382 16:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.382 ************************************ 00:31:11.382 START TEST accel_deomp_full_mthread 00:31:11.383 ************************************ 00:31:11.383 16:08:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:31:11.383 16:08:15 -- accel/accel.sh@16 -- # local accel_opc 00:31:11.383 16:08:15 -- accel/accel.sh@17 -- # local accel_module 00:31:11.383 16:08:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:31:11.383 16:08:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:31:11.383 16:08:15 -- accel/accel.sh@12 -- # build_accel_config 00:31:11.383 16:08:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:31:11.383 16:08:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:31:11.383 16:08:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:31:11.383 16:08:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:31:11.383 16:08:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:31:11.383 16:08:15 -- accel/accel.sh@41 -- # local IFS=, 00:31:11.383 16:08:15 -- accel/accel.sh@42 -- # jq -r . 00:31:11.383 [2024-07-22 16:08:15.443655] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:11.383 [2024-07-22 16:08:15.443826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65971 ] 00:31:11.383 [2024-07-22 16:08:15.611182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.641 [2024-07-22 16:08:15.897187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.177 16:08:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:31:14.177 00:31:14.177 SPDK Configuration: 00:31:14.177 Core mask: 0x1 00:31:14.177 00:31:14.177 Accel Perf Configuration: 00:31:14.177 Workload Type: decompress 00:31:14.177 Transfer size: 111250 bytes 00:31:14.177 Vector count 1 00:31:14.177 Module: software 00:31:14.177 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:31:14.177 Queue depth: 32 00:31:14.177 Allocate depth: 32 00:31:14.177 # threads/core: 2 00:31:14.177 Run time: 1 seconds 00:31:14.177 Verify: Yes 00:31:14.177 00:31:14.177 Running for 1 seconds... 
00:31:14.177 00:31:14.177 Core,Thread Transfers Bandwidth Failed Miscompares 00:31:14.177 ------------------------------------------------------------------------------------ 00:31:14.177 0,1 2208/s 91 MiB/s 0 0 00:31:14.177 0,0 2144/s 88 MiB/s 0 0 00:31:14.177 ==================================================================================== 00:31:14.177 Total 4352/s 461 MiB/s 0 0' 00:31:14.177 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.177 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.177 16:08:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:31:14.177 16:08:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:31:14.177 16:08:18 -- accel/accel.sh@12 -- # build_accel_config 00:31:14.177 16:08:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:31:14.177 16:08:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:31:14.177 16:08:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:31:14.177 16:08:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:31:14.177 16:08:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:31:14.177 16:08:18 -- accel/accel.sh@41 -- # local IFS=, 00:31:14.177 16:08:18 -- accel/accel.sh@42 -- # jq -r . 00:31:14.177 [2024-07-22 16:08:18.238600] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:14.177 [2024-07-22 16:08:18.238781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66003 ] 00:31:14.177 [2024-07-22 16:08:18.420293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.747 [2024-07-22 16:08:18.762435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val=0x1 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val=decompress 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val='111250 bytes' 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:18 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:18 -- accel/accel.sh@21 -- # val=software 00:31:14.747 16:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:18 -- accel/accel.sh@23 -- # accel_module=software 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val=32 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val=32 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val=2 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val=Yes 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:14.747 16:08:19 -- accel/accel.sh@21 -- # val= 00:31:14.747 16:08:19 -- accel/accel.sh@22 -- # case "$var" in 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # IFS=: 00:31:14.747 16:08:19 -- accel/accel.sh@20 -- # read -r var val 00:31:17.276 16:08:21 -- accel/accel.sh@21 -- # val= 00:31:17.276 16:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # IFS=: 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # read -r var val 00:31:17.276 16:08:21 -- accel/accel.sh@21 -- # val= 00:31:17.276 16:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # IFS=: 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # read -r var val 00:31:17.276 16:08:21 -- accel/accel.sh@21 -- # val= 00:31:17.276 16:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # IFS=: 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # 
read -r var val 00:31:17.276 16:08:21 -- accel/accel.sh@21 -- # val= 00:31:17.276 16:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # IFS=: 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # read -r var val 00:31:17.276 16:08:21 -- accel/accel.sh@21 -- # val= 00:31:17.276 16:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # IFS=: 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # read -r var val 00:31:17.276 16:08:21 -- accel/accel.sh@21 -- # val= 00:31:17.276 16:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # IFS=: 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # read -r var val 00:31:17.276 16:08:21 -- accel/accel.sh@21 -- # val= 00:31:17.276 16:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # IFS=: 00:31:17.276 16:08:21 -- accel/accel.sh@20 -- # read -r var val 00:31:17.276 ************************************ 00:31:17.276 END TEST accel_deomp_full_mthread 00:31:17.276 ************************************ 00:31:17.276 16:08:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:31:17.276 16:08:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:31:17.276 16:08:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:17.276 00:31:17.276 real 0m5.675s 00:31:17.276 user 0m4.936s 00:31:17.276 sys 0m0.554s 00:31:17.276 16:08:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.276 16:08:21 -- common/autotest_common.sh@10 -- # set +x 00:31:17.276 16:08:21 -- accel/accel.sh@116 -- # [[ n == y ]] 00:31:17.276 16:08:21 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:31:17.276 16:08:21 -- accel/accel.sh@129 -- # build_accel_config 00:31:17.276 16:08:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:17.276 16:08:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:17.276 16:08:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:31:17.276 16:08:21 -- common/autotest_common.sh@10 -- # set +x 00:31:17.276 16:08:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:31:17.276 16:08:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:31:17.276 16:08:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:31:17.276 16:08:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:31:17.276 16:08:21 -- accel/accel.sh@41 -- # local IFS=, 00:31:17.276 16:08:21 -- accel/accel.sh@42 -- # jq -r . 00:31:17.276 ************************************ 00:31:17.276 START TEST accel_dif_functional_tests 00:31:17.276 ************************************ 00:31:17.276 16:08:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:31:17.276 [2024-07-22 16:08:21.209444] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:17.276 [2024-07-22 16:08:21.209618] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66056 ] 00:31:17.276 [2024-07-22 16:08:21.385128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:17.541 [2024-07-22 16:08:21.667288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.541 [2024-07-22 16:08:21.667384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.541 [2024-07-22 16:08:21.667396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.799 00:31:17.799 00:31:17.799 CUnit - A unit testing framework for C - Version 2.1-3 00:31:17.799 http://cunit.sourceforge.net/ 00:31:17.799 00:31:17.799 00:31:17.799 Suite: accel_dif 00:31:17.799 Test: verify: DIF generated, GUARD check ...passed 00:31:17.799 Test: verify: DIF generated, APPTAG check ...passed 00:31:17.799 Test: verify: DIF generated, REFTAG check ...passed 00:31:17.799 Test: verify: DIF not generated, GUARD check ...passed 00:31:17.799 Test: verify: DIF not generated, APPTAG check ...passed 00:31:17.799 Test: verify: DIF not generated, REFTAG check ...passed 00:31:17.799 Test: verify: APPTAG correct, APPTAG check ...passed 00:31:17.799 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:31:17.799 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:31:17.799 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:31:17.799 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:31:17.799 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 16:08:22.030124] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:31:17.799 [2024-07-22 16:08:22.030233] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:31:17.799 [2024-07-22 16:08:22.030297] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:31:17.799 [2024-07-22 16:08:22.030349] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:31:17.799 [2024-07-22 16:08:22.030395] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:31:17.799 [2024-07-22 16:08:22.030427] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:31:17.799 [2024-07-22 16:08:22.030539] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:31:17.799 passed 00:31:17.799 Test: generate copy: DIF generated, GUARD check ...passed 00:31:17.799 Test: generate copy: DIF generated, APTTAG check ...passed 00:31:17.799 Test: generate copy: DIF generated, REFTAG check ...passed 00:31:17.799 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-07-22 16:08:22.030743] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:31:17.799 passed 00:31:17.799 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:31:17.799 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:31:17.799 Test: generate copy: iovecs-len validate ...[2024-07-22 16:08:22.031247] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
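The dif.c *ERROR* lines above are emitted by the negative verify cases (guard, app-tag, and ref-tag mismatches are injected on purpose), so they are expected output alongside the 20/20 CUnit pass summary; the harness itself simply runs the built test binary with its generated accel JSON config piped over fd 62:

    # as invoked by the test above; -c points at the generated accel config
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62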
00:31:17.799 passed 00:31:17.799 Test: generate copy: buffer alignment validate ...passed 00:31:17.799 00:31:17.799 Run Summary: Type Total Ran Passed Failed Inactive 00:31:17.799 suites 1 1 n/a 0 0 00:31:17.799 tests 20 20 20 0 0 00:31:17.799 asserts 204 204 204 0 n/a 00:31:17.799 00:31:17.799 Elapsed time = 0.005 seconds 00:31:19.190 00:31:19.190 real 0m2.215s 00:31:19.190 user 0m4.133s 00:31:19.190 sys 0m0.362s 00:31:19.190 16:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.190 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.190 ************************************ 00:31:19.190 END TEST accel_dif_functional_tests 00:31:19.190 ************************************ 00:31:19.190 00:31:19.190 real 2m3.721s 00:31:19.190 user 2m11.344s 00:31:19.190 sys 0m13.319s 00:31:19.190 16:08:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:19.190 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.190 ************************************ 00:31:19.190 END TEST accel 00:31:19.190 ************************************ 00:31:19.190 16:08:23 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:31:19.190 16:08:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:19.190 16:08:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:19.190 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.190 ************************************ 00:31:19.190 START TEST accel_rpc 00:31:19.190 ************************************ 00:31:19.190 16:08:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:31:19.449 * Looking for test storage... 00:31:19.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:31:19.449 16:08:23 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:31:19.449 16:08:23 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66137 00:31:19.449 16:08:23 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:31:19.449 16:08:23 -- accel/accel_rpc.sh@15 -- # waitforlisten 66137 00:31:19.449 16:08:23 -- common/autotest_common.sh@819 -- # '[' -z 66137 ']' 00:31:19.449 16:08:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.449 16:08:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:19.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.449 16:08:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.449 16:08:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:19.449 16:08:23 -- common/autotest_common.sh@10 -- # set +x 00:31:19.449 [2024-07-22 16:08:23.601453] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
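The accel_rpc suite that starts here launches spdk_tgt with --wait-for-rpc and waits on /var/tmp/spdk.sock before issuing RPCs; a minimal sketch of driving the same flow by hand with the in-tree rpc.py (paths as in this workspace, RPC names as exercised below):

    ./build/bin/spdk_tgt --wait-for-rpc &
    # assign the copy opcode to the software module before finishing init,
    # then confirm the assignment
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments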
00:31:19.449 [2024-07-22 16:08:23.601644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66137 ] 00:31:19.708 [2024-07-22 16:08:23.813943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.965 [2024-07-22 16:08:24.114083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:19.965 [2024-07-22 16:08:24.114331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.530 16:08:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:20.530 16:08:24 -- common/autotest_common.sh@852 -- # return 0 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:31:20.530 16:08:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:20.530 16:08:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:20.530 16:08:24 -- common/autotest_common.sh@10 -- # set +x 00:31:20.530 ************************************ 00:31:20.530 START TEST accel_assign_opcode 00:31:20.530 ************************************ 00:31:20.530 16:08:24 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:31:20.530 16:08:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.530 16:08:24 -- common/autotest_common.sh@10 -- # set +x 00:31:20.530 [2024-07-22 16:08:24.643090] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:31:20.530 16:08:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:31:20.530 16:08:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.530 16:08:24 -- common/autotest_common.sh@10 -- # set +x 00:31:20.530 [2024-07-22 16:08:24.651022] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:31:20.530 16:08:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.530 16:08:24 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:31:20.530 16:08:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.530 16:08:24 -- common/autotest_common.sh@10 -- # set +x 00:31:21.463 16:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.463 16:08:25 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:31:21.463 16:08:25 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:31:21.463 16:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.463 16:08:25 -- common/autotest_common.sh@10 -- # set +x 00:31:21.463 16:08:25 -- accel/accel_rpc.sh@42 -- # grep software 00:31:21.463 16:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.463 software 00:31:21.463 00:31:21.463 real 0m0.898s 00:31:21.463 user 0m0.012s 00:31:21.463 sys 0m0.012s 00:31:21.463 16:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:21.463 16:08:25 -- common/autotest_common.sh@10 -- # set +x 00:31:21.463 ************************************ 
00:31:21.463 END TEST accel_assign_opcode 00:31:21.463 ************************************ 00:31:21.463 16:08:25 -- accel/accel_rpc.sh@55 -- # killprocess 66137 00:31:21.463 16:08:25 -- common/autotest_common.sh@926 -- # '[' -z 66137 ']' 00:31:21.463 16:08:25 -- common/autotest_common.sh@930 -- # kill -0 66137 00:31:21.463 16:08:25 -- common/autotest_common.sh@931 -- # uname 00:31:21.463 16:08:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:21.463 16:08:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66137 00:31:21.463 killing process with pid 66137 00:31:21.463 16:08:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:21.463 16:08:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:21.463 16:08:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66137' 00:31:21.463 16:08:25 -- common/autotest_common.sh@945 -- # kill 66137 00:31:21.463 16:08:25 -- common/autotest_common.sh@950 -- # wait 66137 00:31:23.990 ************************************ 00:31:23.990 END TEST accel_rpc 00:31:23.990 ************************************ 00:31:23.990 00:31:23.990 real 0m4.662s 00:31:23.990 user 0m4.491s 00:31:23.990 sys 0m0.768s 00:31:23.990 16:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.990 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:23.990 16:08:28 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:31:23.990 16:08:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:23.990 16:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:23.990 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:23.990 ************************************ 00:31:23.990 START TEST app_cmdline 00:31:23.990 ************************************ 00:31:23.990 16:08:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:31:23.990 * Looking for test storage... 00:31:23.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:31:23.990 16:08:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:31:23.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.990 16:08:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=66258 00:31:23.990 16:08:28 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:31:23.990 16:08:28 -- app/cmdline.sh@18 -- # waitforlisten 66258 00:31:23.990 16:08:28 -- common/autotest_common.sh@819 -- # '[' -z 66258 ']' 00:31:23.990 16:08:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.990 16:08:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:23.990 16:08:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.990 16:08:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:23.990 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:24.248 [2024-07-22 16:08:28.312858] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:24.248 [2024-07-22 16:08:28.313106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66258 ] 00:31:24.248 [2024-07-22 16:08:28.491697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.568 [2024-07-22 16:08:28.778910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:24.568 [2024-07-22 16:08:28.779191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.942 16:08:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:25.942 16:08:29 -- common/autotest_common.sh@852 -- # return 0 00:31:25.942 16:08:29 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:31:26.200 { 00:31:26.200 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:31:26.200 "fields": { 00:31:26.200 "major": 24, 00:31:26.200 "minor": 1, 00:31:26.200 "patch": 1, 00:31:26.200 "suffix": "-pre", 00:31:26.200 "commit": "dbef7efac" 00:31:26.200 } 00:31:26.200 } 00:31:26.200 16:08:30 -- app/cmdline.sh@22 -- # expected_methods=() 00:31:26.200 16:08:30 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:31:26.200 16:08:30 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:31:26.200 16:08:30 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:31:26.200 16:08:30 -- app/cmdline.sh@26 -- # sort 00:31:26.200 16:08:30 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:31:26.200 16:08:30 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:31:26.200 16:08:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.200 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:31:26.200 16:08:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.200 16:08:30 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:31:26.200 16:08:30 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:31:26.200 16:08:30 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:26.200 16:08:30 -- common/autotest_common.sh@640 -- # local es=0 00:31:26.200 16:08:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:26.200 16:08:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.200 16:08:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:26.200 16:08:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.200 16:08:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:26.200 16:08:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.200 16:08:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:26.200 16:08:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.200 16:08:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:26.200 16:08:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:31:26.459 request: 00:31:26.459 { 00:31:26.459 "method": "env_dpdk_get_mem_stats", 00:31:26.459 "req_id": 1 00:31:26.459 } 00:31:26.459 Got 
JSON-RPC error response 00:31:26.459 response: 00:31:26.459 { 00:31:26.459 "code": -32601, 00:31:26.459 "message": "Method not found" 00:31:26.459 } 00:31:26.459 16:08:30 -- common/autotest_common.sh@643 -- # es=1 00:31:26.459 16:08:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:26.459 16:08:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:26.459 16:08:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:26.459 16:08:30 -- app/cmdline.sh@1 -- # killprocess 66258 00:31:26.459 16:08:30 -- common/autotest_common.sh@926 -- # '[' -z 66258 ']' 00:31:26.459 16:08:30 -- common/autotest_common.sh@930 -- # kill -0 66258 00:31:26.459 16:08:30 -- common/autotest_common.sh@931 -- # uname 00:31:26.459 16:08:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:26.459 16:08:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66258 00:31:26.459 killing process with pid 66258 00:31:26.459 16:08:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:26.459 16:08:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:26.459 16:08:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66258' 00:31:26.459 16:08:30 -- common/autotest_common.sh@945 -- # kill 66258 00:31:26.459 16:08:30 -- common/autotest_common.sh@950 -- # wait 66258 00:31:29.009 00:31:29.009 real 0m5.000s 00:31:29.009 user 0m5.316s 00:31:29.009 sys 0m0.855s 00:31:29.009 16:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:29.009 ************************************ 00:31:29.009 END TEST app_cmdline 00:31:29.009 ************************************ 00:31:29.009 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:31:29.009 16:08:33 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:31:29.009 16:08:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:29.009 16:08:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:29.009 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:31:29.009 ************************************ 00:31:29.009 START TEST version 00:31:29.009 ************************************ 00:31:29.009 16:08:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:31:29.009 * Looking for test storage... 
00:31:29.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:31:29.267 16:08:33 -- app/version.sh@17 -- # get_header_version major 00:31:29.267 16:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:29.267 16:08:33 -- app/version.sh@14 -- # cut -f2 00:31:29.267 16:08:33 -- app/version.sh@14 -- # tr -d '"' 00:31:29.267 16:08:33 -- app/version.sh@17 -- # major=24 00:31:29.267 16:08:33 -- app/version.sh@18 -- # get_header_version minor 00:31:29.267 16:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:29.267 16:08:33 -- app/version.sh@14 -- # tr -d '"' 00:31:29.267 16:08:33 -- app/version.sh@14 -- # cut -f2 00:31:29.267 16:08:33 -- app/version.sh@18 -- # minor=1 00:31:29.267 16:08:33 -- app/version.sh@19 -- # get_header_version patch 00:31:29.267 16:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:29.267 16:08:33 -- app/version.sh@14 -- # cut -f2 00:31:29.267 16:08:33 -- app/version.sh@14 -- # tr -d '"' 00:31:29.267 16:08:33 -- app/version.sh@19 -- # patch=1 00:31:29.267 16:08:33 -- app/version.sh@20 -- # get_header_version suffix 00:31:29.267 16:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:31:29.267 16:08:33 -- app/version.sh@14 -- # cut -f2 00:31:29.267 16:08:33 -- app/version.sh@14 -- # tr -d '"' 00:31:29.267 16:08:33 -- app/version.sh@20 -- # suffix=-pre 00:31:29.267 16:08:33 -- app/version.sh@22 -- # version=24.1 00:31:29.267 16:08:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:31:29.267 16:08:33 -- app/version.sh@25 -- # version=24.1.1 00:31:29.267 16:08:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:31:29.267 16:08:33 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:31:29.267 16:08:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:31:29.267 16:08:33 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:31:29.267 16:08:33 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:31:29.267 00:31:29.267 real 0m0.141s 00:31:29.267 user 0m0.075s 00:31:29.267 sys 0m0.101s 00:31:29.267 ************************************ 00:31:29.267 END TEST version 00:31:29.267 ************************************ 00:31:29.267 16:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:29.267 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:31:29.267 16:08:33 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:31:29.267 16:08:33 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:31:29.267 16:08:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:29.267 16:08:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:29.267 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:31:29.267 ************************************ 00:31:29.267 START TEST blockdev_general 00:31:29.267 ************************************ 00:31:29.267 16:08:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:31:29.267 * Looking for test storage... 
00:31:29.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:29.267 16:08:33 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:29.267 16:08:33 -- bdev/nbd_common.sh@6 -- # set -e 00:31:29.267 16:08:33 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:29.267 16:08:33 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:29.267 16:08:33 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:29.267 16:08:33 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:29.267 16:08:33 -- bdev/blockdev.sh@18 -- # : 00:31:29.267 16:08:33 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:29.267 16:08:33 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:29.267 16:08:33 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:29.267 16:08:33 -- bdev/blockdev.sh@672 -- # uname -s 00:31:29.267 16:08:33 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:29.267 16:08:33 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:29.267 16:08:33 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:31:29.267 16:08:33 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:29.267 16:08:33 -- bdev/blockdev.sh@682 -- # dek= 00:31:29.267 16:08:33 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:29.267 16:08:33 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:29.267 16:08:33 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:29.267 16:08:33 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:31:29.267 16:08:33 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:31:29.267 16:08:33 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:29.267 16:08:33 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=66434 00:31:29.267 16:08:33 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:29.267 16:08:33 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:29.267 16:08:33 -- bdev/blockdev.sh@47 -- # waitforlisten 66434 00:31:29.267 16:08:33 -- common/autotest_common.sh@819 -- # '[' -z 66434 ']' 00:31:29.267 16:08:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:29.267 16:08:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:29.268 16:08:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:29.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:29.268 16:08:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:29.268 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:31:29.525 [2024-07-22 16:08:33.547347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:29.525 [2024-07-22 16:08:33.547513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66434 ] 00:31:29.525 [2024-07-22 16:08:33.716489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.783 [2024-07-22 16:08:34.025875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:29.783 [2024-07-22 16:08:34.026141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.349 16:08:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:30.349 16:08:34 -- common/autotest_common.sh@852 -- # return 0 00:31:30.349 16:08:34 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:30.349 16:08:34 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:31:30.349 16:08:34 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:31:30.349 16:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.349 16:08:34 -- common/autotest_common.sh@10 -- # set +x 00:31:31.282 [2024-07-22 16:08:35.406372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:31.282 [2024-07-22 16:08:35.406451] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:31.282 00:31:31.282 [2024-07-22 16:08:35.414315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:31.282 [2024-07-22 16:08:35.414366] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:31.282 00:31:31.282 Malloc0 00:31:31.282 Malloc1 00:31:31.282 Malloc2 00:31:31.539 Malloc3 00:31:31.539 Malloc4 00:31:31.539 Malloc5 00:31:31.539 Malloc6 00:31:31.539 Malloc7 00:31:31.797 Malloc8 00:31:31.797 Malloc9 00:31:31.797 [2024-07-22 16:08:35.856958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:31.797 [2024-07-22 16:08:35.857050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:31.797 [2024-07-22 16:08:35.857085] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:31:31.797 [2024-07-22 16:08:35.857110] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:31.797 [2024-07-22 16:08:35.859907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:31.797 [2024-07-22 16:08:35.859954] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:31:31.797 TestPT 00:31:31.797 16:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.797 16:08:35 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:31:31.797 5000+0 records in 00:31:31.797 5000+0 records out 00:31:31.797 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0217514 s, 471 MB/s 00:31:31.797 16:08:35 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:31:31.798 16:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.798 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 AIO0 00:31:31.798 16:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.798 16:08:35 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:31.798 16:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.798 16:08:35 -- common/autotest_common.sh@10 -- # set +x 
00:31:31.798 16:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.798 16:08:35 -- bdev/blockdev.sh@738 -- # cat 00:31:31.798 16:08:35 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:31.798 16:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.798 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 16:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.798 16:08:35 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:31.798 16:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.798 16:08:35 -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 16:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.798 16:08:36 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:31.798 16:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.798 16:08:36 -- common/autotest_common.sh@10 -- # set +x 00:31:31.798 16:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.798 16:08:36 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:31.798 16:08:36 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:31.798 16:08:36 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:31.798 16:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.798 16:08:36 -- common/autotest_common.sh@10 -- # set +x 00:31:32.057 16:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.057 16:08:36 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:32.057 16:08:36 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:32.058 16:08:36 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6a2f14b8-0268-4a2a-8a3f-7722ddabd0e3"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6a2f14b8-0268-4a2a-8a3f-7722ddabd0e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5095685b-eef3-5e98-928b-283e678e9700"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5095685b-eef3-5e98-928b-283e678e9700",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "84e0987f-0e31-513a-b8c6-b8407aafb8e4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "84e0987f-0e31-513a-b8c6-b8407aafb8e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b2a5853b-eeae-5cd2-8b5e-77debd17fc87"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b2a5853b-eeae-5cd2-8b5e-77debd17fc87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a527e21e-f2eb-5894-82c4-7cdde71731a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a527e21e-f2eb-5894-82c4-7cdde71731a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "22de300d-0a34-5792-8b10-596203bfca03"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "22de300d-0a34-5792-8b10-596203bfca03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "48174eb4-79ff-563c-9779-382754ac47d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48174eb4-79ff-563c-9779-382754ac47d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "3b97a27a-62ee-5c85-8ab2-34e337cdc6ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3b97a27a-62ee-5c85-8ab2-34e337cdc6ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8b4944d7-8f42-5d37-a225-80d890d59c1a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8b4944d7-8f42-5d37-a225-80d890d59c1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0fb3ea35-c2ae-5376-bd11-27b464d998b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0fb3ea35-c2ae-5376-bd11-27b464d998b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "57b8d027-60e3-5469-8442-31c658f4e72d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57b8d027-60e3-5469-8442-31c658f4e72d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a0472dad-482b-5d2d-900e-55316b26d197"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a0472dad-482b-5d2d-900e-55316b26d197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "05d9c3cc-2461-4cd8-953f-d0b09f4cc743"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "05d9c3cc-2461-4cd8-953f-d0b09f4cc743",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "05d9c3cc-2461-4cd8-953f-d0b09f4cc743",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b947b01b-6074-4bdc-9142-b5967846664c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "5236a70f-e527-4a54-9bf5-92ce3e6fd356",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "789eff3a-9e39-41b9-aa23-e890ee257d85"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "789eff3a-9e39-41b9-aa23-e890ee257d85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "789eff3a-9e39-41b9-aa23-e890ee257d85",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0e465cb4-55ce-4ba9-92f4-52950eab0751",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "206112d9-bd20-435b-bd41-0a2881fefdd9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "93def095-d54c-46f9-93c6-ec09849d919d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "93def095-d54c-46f9-93c6-ec09849d919d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "93def095-d54c-46f9-93c6-ec09849d919d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "4cc9bf7f-222d-4142-a11e-19a69951bd63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9355d2f3-492b-47d7-b9ca-21188519c7d5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2b15f40a-3a0d-4da2-9159-b889a55625b3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2b15f40a-3a0d-4da2-9159-b889a55625b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:31:32.058 16:08:36 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:32.058 16:08:36 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:31:32.058 16:08:36 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:32.058 16:08:36 -- bdev/blockdev.sh@752 -- # killprocess 66434 00:31:32.058 16:08:36 -- common/autotest_common.sh@926 -- # '[' -z 66434 ']' 00:31:32.058 16:08:36 -- common/autotest_common.sh@930 -- # kill -0 66434 00:31:32.058 16:08:36 -- common/autotest_common.sh@931 -- # uname 00:31:32.058 16:08:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:32.058 16:08:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66434 00:31:32.058 16:08:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:32.058 16:08:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:32.058 killing process with pid 66434 00:31:32.058 16:08:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66434' 00:31:32.058 16:08:36 -- common/autotest_common.sh@945 -- # kill 66434 00:31:32.058 16:08:36 -- common/autotest_common.sh@950 -- # wait 66434 00:31:36.245 16:08:39 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:36.245 16:08:39 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:31:36.245 16:08:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:36.245 
16:08:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:36.245 16:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:36.245 ************************************ 00:31:36.245 START TEST bdev_hello_world 00:31:36.245 ************************************ 00:31:36.245 16:08:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:31:36.245 [2024-07-22 16:08:39.924058] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:36.245 [2024-07-22 16:08:39.924218] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66519 ] 00:31:36.245 [2024-07-22 16:08:40.089715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.245 [2024-07-22 16:08:40.390524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.812 [2024-07-22 16:08:40.840711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:36.812 [2024-07-22 16:08:40.840816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:36.812 [2024-07-22 16:08:40.848680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:36.812 [2024-07-22 16:08:40.848748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:36.812 [2024-07-22 16:08:40.856702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:36.812 [2024-07-22 16:08:40.856786] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:36.812 [2024-07-22 16:08:40.856807] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:36.812 [2024-07-22 16:08:41.058431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:36.812 [2024-07-22 16:08:41.058553] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:36.812 [2024-07-22 16:08:41.058584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:31:36.812 [2024-07-22 16:08:41.058608] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:36.812 [2024-07-22 16:08:41.061769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:36.812 [2024-07-22 16:08:41.061823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:31:37.377 [2024-07-22 16:08:41.408597] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:37.377 [2024-07-22 16:08:41.408726] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:31:37.377 [2024-07-22 16:08:41.408800] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:37.377 [2024-07-22 16:08:41.408913] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:37.377 [2024-07-22 16:08:41.409084] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:37.377 [2024-07-22 16:08:41.409121] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:37.377 [2024-07-22 16:08:41.409194] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:31:37.377 00:31:37.377 [2024-07-22 16:08:41.409239] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:39.902 00:31:39.902 real 0m3.814s 00:31:39.902 user 0m3.122s 00:31:39.902 sys 0m0.549s 00:31:39.902 16:08:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.902 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:31:39.902 ************************************ 00:31:39.902 END TEST bdev_hello_world 00:31:39.902 ************************************ 00:31:39.902 16:08:43 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:39.902 16:08:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:39.902 16:08:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:39.902 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:31:39.902 ************************************ 00:31:39.902 START TEST bdev_bounds 00:31:39.902 ************************************ 00:31:39.902 16:08:43 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:31:39.902 16:08:43 -- bdev/blockdev.sh@288 -- # bdevio_pid=66583 00:31:39.902 16:08:43 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:39.902 16:08:43 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:39.902 Process bdevio pid: 66583 00:31:39.902 16:08:43 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 66583' 00:31:39.902 16:08:43 -- bdev/blockdev.sh@291 -- # waitforlisten 66583 00:31:39.902 16:08:43 -- common/autotest_common.sh@819 -- # '[' -z 66583 ']' 00:31:39.902 16:08:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.902 16:08:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:39.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.902 16:08:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.902 16:08:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:39.902 16:08:43 -- common/autotest_common.sh@10 -- # set +x 00:31:39.902 [2024-07-22 16:08:43.805085] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:39.902 [2024-07-22 16:08:43.805796] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66583 ] 00:31:39.902 [2024-07-22 16:08:43.983802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.160 [2024-07-22 16:08:44.281716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.160 [2024-07-22 16:08:44.281845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.160 [2024-07-22 16:08:44.281868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.725 [2024-07-22 16:08:44.714285] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:40.725 [2024-07-22 16:08:44.714392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:40.725 [2024-07-22 16:08:44.722149] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:40.726 [2024-07-22 16:08:44.722210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:40.726 [2024-07-22 16:08:44.730179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:40.726 [2024-07-22 16:08:44.730230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:40.726 [2024-07-22 16:08:44.730248] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:40.726 [2024-07-22 16:08:44.960197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:40.726 [2024-07-22 16:08:44.960353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:40.726 [2024-07-22 16:08:44.960414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:31:40.726 [2024-07-22 16:08:44.960440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:40.726 [2024-07-22 16:08:44.964808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:40.726 [2024-07-22 16:08:44.964869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:31:41.292 16:08:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:41.292 16:08:45 -- common/autotest_common.sh@852 -- # return 0 00:31:41.292 16:08:45 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:41.549 I/O targets: 00:31:41.549 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:31:41.549 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:31:41.549 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:31:41.549 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:31:41.549 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:31:41.549 raid0: 131072 blocks of 512 bytes (64 MiB) 00:31:41.549 concat0: 131072 blocks of 512 bytes (64 MiB) 00:31:41.549 raid1: 65536 blocks of 512 bytes (32 MiB) 00:31:41.549 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
00:31:41.549 00:31:41.549 00:31:41.549 CUnit - A unit testing framework for C - Version 2.1-3 00:31:41.549 http://cunit.sourceforge.net/ 00:31:41.549 00:31:41.549 00:31:41.549 Suite: bdevio tests on: AIO0 00:31:41.549 Test: blockdev write read block ...passed 00:31:41.549 Test: blockdev write zeroes read block ...passed 00:31:41.549 Test: blockdev write zeroes read no split ...passed 00:31:41.549 Test: blockdev write zeroes read split ...passed 00:31:41.549 Test: blockdev write zeroes read split partial ...passed 00:31:41.549 Test: blockdev reset ...passed 00:31:41.549 Test: blockdev write read 8 blocks ...passed 00:31:41.549 Test: blockdev write read size > 128k ...passed 00:31:41.549 Test: blockdev write read invalid size ...passed 00:31:41.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:41.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:41.549 Test: blockdev write read max offset ...passed 00:31:41.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:41.549 Test: blockdev writev readv 8 blocks ...passed 00:31:41.549 Test: blockdev writev readv 30 x 1block ...passed 00:31:41.549 Test: blockdev writev readv block ...passed 00:31:41.549 Test: blockdev writev readv size > 128k ...passed 00:31:41.549 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:41.549 Test: blockdev comparev and writev ...passed 00:31:41.549 Test: blockdev nvme passthru rw ...passed 00:31:41.549 Test: blockdev nvme passthru vendor specific ...passed 00:31:41.549 Test: blockdev nvme admin passthru ...passed 00:31:41.549 Test: blockdev copy ...passed 00:31:41.549 Suite: bdevio tests on: raid1 00:31:41.549 Test: blockdev write read block ...passed 00:31:41.549 Test: blockdev write zeroes read block ...passed 00:31:41.549 Test: blockdev write zeroes read no split ...passed 00:31:41.549 Test: blockdev write zeroes read split ...passed 00:31:41.549 Test: blockdev write zeroes read split partial ...passed 00:31:41.549 Test: blockdev reset ...passed 00:31:41.549 Test: blockdev write read 8 blocks ...passed 00:31:41.549 Test: blockdev write read size > 128k ...passed 00:31:41.549 Test: blockdev write read invalid size ...passed 00:31:41.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:41.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:41.549 Test: blockdev write read max offset ...passed 00:31:41.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:41.549 Test: blockdev writev readv 8 blocks ...passed 00:31:41.549 Test: blockdev writev readv 30 x 1block ...passed 00:31:41.549 Test: blockdev writev readv block ...passed 00:31:41.549 Test: blockdev writev readv size > 128k ...passed 00:31:41.549 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:41.549 Test: blockdev comparev and writev ...passed 00:31:41.549 Test: blockdev nvme passthru rw ...passed 00:31:41.549 Test: blockdev nvme passthru vendor specific ...passed 00:31:41.549 Test: blockdev nvme admin passthru ...passed 00:31:41.549 Test: blockdev copy ...passed 00:31:41.549 Suite: bdevio tests on: concat0 00:31:41.549 Test: blockdev write read block ...passed 00:31:41.549 Test: blockdev write zeroes read block ...passed 00:31:41.549 Test: blockdev write zeroes read no split ...passed 00:31:41.807 Test: blockdev write zeroes read split ...passed 00:31:41.807 Test: blockdev write zeroes read split partial ...passed 00:31:41.807 Test: blockdev reset 
...passed 00:31:41.807 Test: blockdev write read 8 blocks ...passed 00:31:41.807 Test: blockdev write read size > 128k ...passed 00:31:41.807 Test: blockdev write read invalid size ...passed 00:31:41.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:41.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:41.807 Test: blockdev write read max offset ...passed 00:31:41.807 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:41.807 Test: blockdev writev readv 8 blocks ...passed 00:31:41.807 Test: blockdev writev readv 30 x 1block ...passed 00:31:41.807 Test: blockdev writev readv block ...passed 00:31:41.807 Test: blockdev writev readv size > 128k ...passed 00:31:41.807 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:41.807 Test: blockdev comparev and writev ...passed 00:31:41.807 Test: blockdev nvme passthru rw ...passed 00:31:41.807 Test: blockdev nvme passthru vendor specific ...passed 00:31:41.807 Test: blockdev nvme admin passthru ...passed 00:31:41.807 Test: blockdev copy ...passed 00:31:41.807 Suite: bdevio tests on: raid0 00:31:41.807 Test: blockdev write read block ...passed 00:31:41.807 Test: blockdev write zeroes read block ...passed 00:31:41.807 Test: blockdev write zeroes read no split ...passed 00:31:41.807 Test: blockdev write zeroes read split ...passed 00:31:41.807 Test: blockdev write zeroes read split partial ...passed 00:31:41.807 Test: blockdev reset ...passed 00:31:41.807 Test: blockdev write read 8 blocks ...passed 00:31:41.807 Test: blockdev write read size > 128k ...passed 00:31:41.807 Test: blockdev write read invalid size ...passed 00:31:41.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:41.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:41.807 Test: blockdev write read max offset ...passed 00:31:41.807 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:41.807 Test: blockdev writev readv 8 blocks ...passed 00:31:41.807 Test: blockdev writev readv 30 x 1block ...passed 00:31:41.807 Test: blockdev writev readv block ...passed 00:31:41.807 Test: blockdev writev readv size > 128k ...passed 00:31:41.807 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:41.807 Test: blockdev comparev and writev ...passed 00:31:41.807 Test: blockdev nvme passthru rw ...passed 00:31:41.807 Test: blockdev nvme passthru vendor specific ...passed 00:31:41.807 Test: blockdev nvme admin passthru ...passed 00:31:41.807 Test: blockdev copy ...passed 00:31:41.807 Suite: bdevio tests on: TestPT 00:31:41.807 Test: blockdev write read block ...passed 00:31:41.807 Test: blockdev write zeroes read block ...passed 00:31:41.807 Test: blockdev write zeroes read no split ...passed 00:31:41.807 Test: blockdev write zeroes read split ...passed 00:31:41.807 Test: blockdev write zeroes read split partial ...passed 00:31:41.807 Test: blockdev reset ...passed 00:31:41.807 Test: blockdev write read 8 blocks ...passed 00:31:41.807 Test: blockdev write read size > 128k ...passed 00:31:41.807 Test: blockdev write read invalid size ...passed 00:31:41.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:41.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:41.807 Test: blockdev write read max offset ...passed 00:31:41.807 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:41.807 Test: blockdev writev readv 8 blocks 
...passed 00:31:41.807 Test: blockdev writev readv 30 x 1block ...passed 00:31:41.807 Test: blockdev writev readv block ...passed 00:31:41.807 Test: blockdev writev readv size > 128k ...passed 00:31:41.807 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:41.807 Test: blockdev comparev and writev ...passed 00:31:41.807 Test: blockdev nvme passthru rw ...passed 00:31:41.807 Test: blockdev nvme passthru vendor specific ...passed 00:31:41.807 Test: blockdev nvme admin passthru ...passed 00:31:41.807 Test: blockdev copy ...passed 00:31:41.807 Suite: bdevio tests on: Malloc2p7 00:31:41.807 Test: blockdev write read block ...passed 00:31:41.807 Test: blockdev write zeroes read block ...passed 00:31:41.807 Test: blockdev write zeroes read no split ...passed 00:31:41.807 Test: blockdev write zeroes read split ...passed 00:31:42.066 Test: blockdev write zeroes read split partial ...passed 00:31:42.066 Test: blockdev reset ...passed 00:31:42.066 Test: blockdev write read 8 blocks ...passed 00:31:42.066 Test: blockdev write read size > 128k ...passed 00:31:42.066 Test: blockdev write read invalid size ...passed 00:31:42.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.066 Test: blockdev write read max offset ...passed 00:31:42.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.066 Test: blockdev writev readv 8 blocks ...passed 00:31:42.066 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.066 Test: blockdev writev readv block ...passed 00:31:42.066 Test: blockdev writev readv size > 128k ...passed 00:31:42.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.066 Test: blockdev comparev and writev ...passed 00:31:42.066 Test: blockdev nvme passthru rw ...passed 00:31:42.066 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.066 Test: blockdev nvme admin passthru ...passed 00:31:42.066 Test: blockdev copy ...passed 00:31:42.066 Suite: bdevio tests on: Malloc2p6 00:31:42.066 Test: blockdev write read block ...passed 00:31:42.066 Test: blockdev write zeroes read block ...passed 00:31:42.066 Test: blockdev write zeroes read no split ...passed 00:31:42.066 Test: blockdev write zeroes read split ...passed 00:31:42.066 Test: blockdev write zeroes read split partial ...passed 00:31:42.066 Test: blockdev reset ...passed 00:31:42.066 Test: blockdev write read 8 blocks ...passed 00:31:42.066 Test: blockdev write read size > 128k ...passed 00:31:42.066 Test: blockdev write read invalid size ...passed 00:31:42.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.066 Test: blockdev write read max offset ...passed 00:31:42.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.066 Test: blockdev writev readv 8 blocks ...passed 00:31:42.066 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.066 Test: blockdev writev readv block ...passed 00:31:42.066 Test: blockdev writev readv size > 128k ...passed 00:31:42.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.066 Test: blockdev comparev and writev ...passed 00:31:42.066 Test: blockdev nvme passthru rw ...passed 00:31:42.066 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.066 Test: blockdev nvme admin passthru ...passed 00:31:42.066 Test: blockdev copy ...passed 
00:31:42.066 Suite: bdevio tests on: Malloc2p5 00:31:42.066 Test: blockdev write read block ...passed 00:31:42.066 Test: blockdev write zeroes read block ...passed 00:31:42.066 Test: blockdev write zeroes read no split ...passed 00:31:42.066 Test: blockdev write zeroes read split ...passed 00:31:42.066 Test: blockdev write zeroes read split partial ...passed 00:31:42.066 Test: blockdev reset ...passed 00:31:42.066 Test: blockdev write read 8 blocks ...passed 00:31:42.066 Test: blockdev write read size > 128k ...passed 00:31:42.066 Test: blockdev write read invalid size ...passed 00:31:42.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.066 Test: blockdev write read max offset ...passed 00:31:42.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.066 Test: blockdev writev readv 8 blocks ...passed 00:31:42.066 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.066 Test: blockdev writev readv block ...passed 00:31:42.066 Test: blockdev writev readv size > 128k ...passed 00:31:42.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.066 Test: blockdev comparev and writev ...passed 00:31:42.066 Test: blockdev nvme passthru rw ...passed 00:31:42.066 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.066 Test: blockdev nvme admin passthru ...passed 00:31:42.066 Test: blockdev copy ...passed 00:31:42.066 Suite: bdevio tests on: Malloc2p4 00:31:42.066 Test: blockdev write read block ...passed 00:31:42.066 Test: blockdev write zeroes read block ...passed 00:31:42.066 Test: blockdev write zeroes read no split ...passed 00:31:42.066 Test: blockdev write zeroes read split ...passed 00:31:42.066 Test: blockdev write zeroes read split partial ...passed 00:31:42.066 Test: blockdev reset ...passed 00:31:42.066 Test: blockdev write read 8 blocks ...passed 00:31:42.066 Test: blockdev write read size > 128k ...passed 00:31:42.066 Test: blockdev write read invalid size ...passed 00:31:42.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.066 Test: blockdev write read max offset ...passed 00:31:42.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.066 Test: blockdev writev readv 8 blocks ...passed 00:31:42.066 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.066 Test: blockdev writev readv block ...passed 00:31:42.066 Test: blockdev writev readv size > 128k ...passed 00:31:42.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.066 Test: blockdev comparev and writev ...passed 00:31:42.066 Test: blockdev nvme passthru rw ...passed 00:31:42.066 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.066 Test: blockdev nvme admin passthru ...passed 00:31:42.066 Test: blockdev copy ...passed 00:31:42.066 Suite: bdevio tests on: Malloc2p3 00:31:42.066 Test: blockdev write read block ...passed 00:31:42.066 Test: blockdev write zeroes read block ...passed 00:31:42.067 Test: blockdev write zeroes read no split ...passed 00:31:42.067 Test: blockdev write zeroes read split ...passed 00:31:42.325 Test: blockdev write zeroes read split partial ...passed 00:31:42.325 Test: blockdev reset ...passed 00:31:42.325 Test: blockdev write read 8 blocks ...passed 00:31:42.325 Test: blockdev write read size > 128k ...passed 00:31:42.325 Test: 
blockdev write read invalid size ...passed 00:31:42.325 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.325 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.325 Test: blockdev write read max offset ...passed 00:31:42.325 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.325 Test: blockdev writev readv 8 blocks ...passed 00:31:42.325 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.325 Test: blockdev writev readv block ...passed 00:31:42.325 Test: blockdev writev readv size > 128k ...passed 00:31:42.325 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.325 Test: blockdev comparev and writev ...passed 00:31:42.325 Test: blockdev nvme passthru rw ...passed 00:31:42.325 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.325 Test: blockdev nvme admin passthru ...passed 00:31:42.325 Test: blockdev copy ...passed 00:31:42.325 Suite: bdevio tests on: Malloc2p2 00:31:42.325 Test: blockdev write read block ...passed 00:31:42.325 Test: blockdev write zeroes read block ...passed 00:31:42.325 Test: blockdev write zeroes read no split ...passed 00:31:42.325 Test: blockdev write zeroes read split ...passed 00:31:42.325 Test: blockdev write zeroes read split partial ...passed 00:31:42.325 Test: blockdev reset ...passed 00:31:42.325 Test: blockdev write read 8 blocks ...passed 00:31:42.325 Test: blockdev write read size > 128k ...passed 00:31:42.325 Test: blockdev write read invalid size ...passed 00:31:42.325 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.325 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.325 Test: blockdev write read max offset ...passed 00:31:42.325 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.325 Test: blockdev writev readv 8 blocks ...passed 00:31:42.325 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.325 Test: blockdev writev readv block ...passed 00:31:42.325 Test: blockdev writev readv size > 128k ...passed 00:31:42.325 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.325 Test: blockdev comparev and writev ...passed 00:31:42.325 Test: blockdev nvme passthru rw ...passed 00:31:42.325 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.325 Test: blockdev nvme admin passthru ...passed 00:31:42.325 Test: blockdev copy ...passed 00:31:42.325 Suite: bdevio tests on: Malloc2p1 00:31:42.325 Test: blockdev write read block ...passed 00:31:42.325 Test: blockdev write zeroes read block ...passed 00:31:42.325 Test: blockdev write zeroes read no split ...passed 00:31:42.325 Test: blockdev write zeroes read split ...passed 00:31:42.325 Test: blockdev write zeroes read split partial ...passed 00:31:42.325 Test: blockdev reset ...passed 00:31:42.325 Test: blockdev write read 8 blocks ...passed 00:31:42.325 Test: blockdev write read size > 128k ...passed 00:31:42.325 Test: blockdev write read invalid size ...passed 00:31:42.325 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.325 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.325 Test: blockdev write read max offset ...passed 00:31:42.325 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.325 Test: blockdev writev readv 8 blocks ...passed 00:31:42.325 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.325 Test: blockdev writev readv block ...passed 
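
The same 23 "Test:" cases run for every bdev, which is where the totals in the run summary further down come from: 16 suites (Malloc0, Malloc1p0/1p1, Malloc2p0-2p7, TestPT, raid0, concat0, raid1, AIO0) times 23 cases each. A one-line check of that arithmetic:

    suites=16; tests_per_suite=23
    echo $(( suites * tests_per_suite ))   # 368, matching "tests 368 368 368 0 0"
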
00:31:42.325 Test: blockdev writev readv size > 128k ...passed 00:31:42.325 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.325 Test: blockdev comparev and writev ...passed 00:31:42.325 Test: blockdev nvme passthru rw ...passed 00:31:42.325 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.325 Test: blockdev nvme admin passthru ...passed 00:31:42.325 Test: blockdev copy ...passed 00:31:42.325 Suite: bdevio tests on: Malloc2p0 00:31:42.325 Test: blockdev write read block ...passed 00:31:42.325 Test: blockdev write zeroes read block ...passed 00:31:42.325 Test: blockdev write zeroes read no split ...passed 00:31:42.325 Test: blockdev write zeroes read split ...passed 00:31:42.325 Test: blockdev write zeroes read split partial ...passed 00:31:42.325 Test: blockdev reset ...passed 00:31:42.325 Test: blockdev write read 8 blocks ...passed 00:31:42.325 Test: blockdev write read size > 128k ...passed 00:31:42.325 Test: blockdev write read invalid size ...passed 00:31:42.325 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.325 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.325 Test: blockdev write read max offset ...passed 00:31:42.325 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.325 Test: blockdev writev readv 8 blocks ...passed 00:31:42.325 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.325 Test: blockdev writev readv block ...passed 00:31:42.325 Test: blockdev writev readv size > 128k ...passed 00:31:42.325 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.325 Test: blockdev comparev and writev ...passed 00:31:42.325 Test: blockdev nvme passthru rw ...passed 00:31:42.325 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.325 Test: blockdev nvme admin passthru ...passed 00:31:42.325 Test: blockdev copy ...passed 00:31:42.325 Suite: bdevio tests on: Malloc1p1 00:31:42.325 Test: blockdev write read block ...passed 00:31:42.325 Test: blockdev write zeroes read block ...passed 00:31:42.325 Test: blockdev write zeroes read no split ...passed 00:31:42.584 Test: blockdev write zeroes read split ...passed 00:31:42.584 Test: blockdev write zeroes read split partial ...passed 00:31:42.584 Test: blockdev reset ...passed 00:31:42.584 Test: blockdev write read 8 blocks ...passed 00:31:42.584 Test: blockdev write read size > 128k ...passed 00:31:42.584 Test: blockdev write read invalid size ...passed 00:31:42.584 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.584 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.584 Test: blockdev write read max offset ...passed 00:31:42.584 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.584 Test: blockdev writev readv 8 blocks ...passed 00:31:42.584 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.584 Test: blockdev writev readv block ...passed 00:31:42.584 Test: blockdev writev readv size > 128k ...passed 00:31:42.584 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.584 Test: blockdev comparev and writev ...passed 00:31:42.584 Test: blockdev nvme passthru rw ...passed 00:31:42.584 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.584 Test: blockdev nvme admin passthru ...passed 00:31:42.584 Test: blockdev copy ...passed 00:31:42.584 Suite: bdevio tests on: Malloc1p0 00:31:42.584 Test: blockdev write read block ...passed 00:31:42.584 Test: blockdev 
write zeroes read block ...passed 00:31:42.584 Test: blockdev write zeroes read no split ...passed 00:31:42.584 Test: blockdev write zeroes read split ...passed 00:31:42.584 Test: blockdev write zeroes read split partial ...passed 00:31:42.584 Test: blockdev reset ...passed 00:31:42.584 Test: blockdev write read 8 blocks ...passed 00:31:42.584 Test: blockdev write read size > 128k ...passed 00:31:42.584 Test: blockdev write read invalid size ...passed 00:31:42.584 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.584 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.584 Test: blockdev write read max offset ...passed 00:31:42.584 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.584 Test: blockdev writev readv 8 blocks ...passed 00:31:42.584 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.584 Test: blockdev writev readv block ...passed 00:31:42.584 Test: blockdev writev readv size > 128k ...passed 00:31:42.584 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.584 Test: blockdev comparev and writev ...passed 00:31:42.584 Test: blockdev nvme passthru rw ...passed 00:31:42.584 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.584 Test: blockdev nvme admin passthru ...passed 00:31:42.584 Test: blockdev copy ...passed 00:31:42.584 Suite: bdevio tests on: Malloc0 00:31:42.584 Test: blockdev write read block ...passed 00:31:42.584 Test: blockdev write zeroes read block ...passed 00:31:42.584 Test: blockdev write zeroes read no split ...passed 00:31:42.585 Test: blockdev write zeroes read split ...passed 00:31:42.585 Test: blockdev write zeroes read split partial ...passed 00:31:42.585 Test: blockdev reset ...passed 00:31:42.585 Test: blockdev write read 8 blocks ...passed 00:31:42.585 Test: blockdev write read size > 128k ...passed 00:31:42.585 Test: blockdev write read invalid size ...passed 00:31:42.585 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:42.585 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:42.585 Test: blockdev write read max offset ...passed 00:31:42.585 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:42.585 Test: blockdev writev readv 8 blocks ...passed 00:31:42.585 Test: blockdev writev readv 30 x 1block ...passed 00:31:42.585 Test: blockdev writev readv block ...passed 00:31:42.585 Test: blockdev writev readv size > 128k ...passed 00:31:42.585 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:42.585 Test: blockdev comparev and writev ...passed 00:31:42.585 Test: blockdev nvme passthru rw ...passed 00:31:42.585 Test: blockdev nvme passthru vendor specific ...passed 00:31:42.585 Test: blockdev nvme admin passthru ...passed 00:31:42.585 Test: blockdev copy ...passed 00:31:42.585 00:31:42.585 Run Summary: Type Total Ran Passed Failed Inactive 00:31:42.585 suites 16 16 n/a 0 0 00:31:42.585 tests 368 368 368 0 0 00:31:42.585 asserts 2224 2224 2224 0 n/a 00:31:42.585 00:31:42.585 Elapsed time = 3.260 seconds 00:31:42.585 0 00:31:42.585 16:08:46 -- bdev/blockdev.sh@293 -- # killprocess 66583 00:31:42.585 16:08:46 -- common/autotest_common.sh@926 -- # '[' -z 66583 ']' 00:31:42.585 16:08:46 -- common/autotest_common.sh@930 -- # kill -0 66583 00:31:42.585 16:08:46 -- common/autotest_common.sh@931 -- # uname 00:31:42.585 16:08:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:42.585 16:08:46 -- common/autotest_common.sh@932 
-- # ps --no-headers -o comm= 66583 00:31:42.585 killing process with pid 66583 00:31:42.585 16:08:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:42.585 16:08:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:42.585 16:08:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66583' 00:31:42.585 16:08:46 -- common/autotest_common.sh@945 -- # kill 66583 00:31:42.585 16:08:46 -- common/autotest_common.sh@950 -- # wait 66583 00:31:45.127 16:08:48 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:45.127 00:31:45.127 real 0m5.104s 00:31:45.127 user 0m12.986s 00:31:45.127 sys 0m0.841s 00:31:45.127 16:08:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:45.127 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:31:45.127 ************************************ 00:31:45.127 END TEST bdev_bounds 00:31:45.127 ************************************ 00:31:45.127 16:08:48 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:31:45.127 16:08:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:31:45.127 16:08:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:45.127 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:31:45.127 ************************************ 00:31:45.127 START TEST bdev_nbd 00:31:45.127 ************************************ 00:31:45.127 16:08:48 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:31:45.127 16:08:48 -- bdev/blockdev.sh@298 -- # uname -s 00:31:45.127 16:08:48 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:45.127 16:08:48 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:45.127 16:08:48 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:45.127 16:08:48 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:31:45.127 16:08:48 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:45.127 16:08:48 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:31:45.127 16:08:48 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:45.127 16:08:48 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:45.127 16:08:48 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:45.127 16:08:48 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:31:45.127 16:08:48 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:45.127 16:08:48 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:45.127 16:08:48 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 
'raid1' 'AIO0') 00:31:45.127 16:08:48 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:45.127 16:08:48 -- bdev/blockdev.sh@316 -- # nbd_pid=66668 00:31:45.127 16:08:48 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:45.127 16:08:48 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:45.127 16:08:48 -- bdev/blockdev.sh@318 -- # waitforlisten 66668 /var/tmp/spdk-nbd.sock 00:31:45.127 16:08:48 -- common/autotest_common.sh@819 -- # '[' -z 66668 ']' 00:31:45.127 16:08:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:45.127 16:08:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:45.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:45.127 16:08:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:45.127 16:08:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:45.127 16:08:48 -- common/autotest_common.sh@10 -- # set +x 00:31:45.127 [2024-07-22 16:08:48.970834] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:45.127 [2024-07-22 16:08:48.971042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.127 [2024-07-22 16:08:49.150722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.385 [2024-07-22 16:08:49.470432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.642 [2024-07-22 16:08:49.897702] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:45.642 [2024-07-22 16:08:49.897871] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:45.642 [2024-07-22 16:08:49.905618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:45.642 [2024-07-22 16:08:49.905737] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:45.642 [2024-07-22 16:08:49.913708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:45.642 [2024-07-22 16:08:49.913769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:45.642 [2024-07-22 16:08:49.913799] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:45.900 [2024-07-22 16:08:50.166459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:45.900 [2024-07-22 16:08:50.166583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:45.900 [2024-07-22 16:08:50.166621] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:31:45.900 [2024-07-22 16:08:50.166639] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:45.900 [2024-07-22 16:08:50.170103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:45.900 [2024-07-22 16:08:50.170155] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:31:46.465 16:08:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:46.465 16:08:50 -- 
common/autotest_common.sh@852 -- # return 0 00:31:46.465 16:08:50 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@24 -- # local i 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:46.465 16:08:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:31:46.722 16:08:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:46.722 16:08:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:46.980 16:08:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:46.980 16:08:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:46.980 16:08:50 -- common/autotest_common.sh@857 -- # local i 00:31:46.980 16:08:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:46.980 16:08:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:46.980 16:08:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:46.980 16:08:51 -- common/autotest_common.sh@861 -- # break 00:31:46.980 16:08:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:46.980 16:08:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:46.980 16:08:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:46.980 1+0 records in 00:31:46.980 1+0 records out 00:31:46.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331782 s, 12.3 MB/s 00:31:46.980 16:08:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:46.980 16:08:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:46.980 16:08:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:46.980 16:08:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:46.980 16:08:51 -- common/autotest_common.sh@877 -- # return 0 00:31:46.980 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:46.980 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:46.980 16:08:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc1p0 00:31:47.244 16:08:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:47.244 16:08:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:47.244 16:08:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:47.244 16:08:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:31:47.244 16:08:51 -- common/autotest_common.sh@857 -- # local i 00:31:47.244 16:08:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:47.244 16:08:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:47.244 16:08:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:31:47.244 16:08:51 -- common/autotest_common.sh@861 -- # break 00:31:47.244 16:08:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:47.244 16:08:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:47.244 16:08:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.244 1+0 records in 00:31:47.244 1+0 records out 00:31:47.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407176 s, 10.1 MB/s 00:31:47.244 16:08:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.244 16:08:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:47.244 16:08:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.244 16:08:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:47.244 16:08:51 -- common/autotest_common.sh@877 -- # return 0 00:31:47.244 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:47.244 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:47.244 16:08:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:31:47.503 16:08:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:31:47.503 16:08:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:31:47.503 16:08:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:31:47.503 16:08:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:31:47.503 16:08:51 -- common/autotest_common.sh@857 -- # local i 00:31:47.503 16:08:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:47.503 16:08:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:47.503 16:08:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:31:47.503 16:08:51 -- common/autotest_common.sh@861 -- # break 00:31:47.503 16:08:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:47.503 16:08:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:47.503 16:08:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.503 1+0 records in 00:31:47.503 1+0 records out 00:31:47.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423829 s, 9.7 MB/s 00:31:47.503 16:08:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.503 16:08:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:47.503 16:08:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.503 16:08:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:47.503 16:08:51 -- common/autotest_common.sh@877 -- # return 0 00:31:47.503 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:47.503 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:47.503 16:08:51 -- bdev/nbd_common.sh@28 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:31:47.761 16:08:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:31:47.761 16:08:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:31:47.761 16:08:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:31:47.761 16:08:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:31:47.761 16:08:51 -- common/autotest_common.sh@857 -- # local i 00:31:47.761 16:08:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:47.761 16:08:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:47.761 16:08:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:31:47.761 16:08:51 -- common/autotest_common.sh@861 -- # break 00:31:47.761 16:08:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:47.761 16:08:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:47.761 16:08:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:47.761 1+0 records in 00:31:47.761 1+0 records out 00:31:47.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435721 s, 9.4 MB/s 00:31:47.761 16:08:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.761 16:08:51 -- common/autotest_common.sh@874 -- # size=4096 00:31:47.761 16:08:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:47.761 16:08:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:47.761 16:08:51 -- common/autotest_common.sh@877 -- # return 0 00:31:47.761 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:47.761 16:08:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:47.761 16:08:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:31:48.019 16:08:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:31:48.019 16:08:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:31:48.019 16:08:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:31:48.019 16:08:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:31:48.019 16:08:52 -- common/autotest_common.sh@857 -- # local i 00:31:48.019 16:08:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:48.019 16:08:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:48.019 16:08:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:31:48.019 16:08:52 -- common/autotest_common.sh@861 -- # break 00:31:48.019 16:08:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:48.019 16:08:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:48.019 16:08:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.019 1+0 records in 00:31:48.019 1+0 records out 00:31:48.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406907 s, 10.1 MB/s 00:31:48.019 16:08:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.019 16:08:52 -- common/autotest_common.sh@874 -- # size=4096 00:31:48.019 16:08:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.019 16:08:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:48.019 16:08:52 -- common/autotest_common.sh@877 -- # return 0 00:31:48.019 16:08:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:48.019 16:08:52 
-- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:48.019 16:08:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:31:48.276 16:08:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:31:48.277 16:08:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:31:48.277 16:08:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:31:48.277 16:08:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:31:48.277 16:08:52 -- common/autotest_common.sh@857 -- # local i 00:31:48.277 16:08:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:48.277 16:08:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:48.277 16:08:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:31:48.277 16:08:52 -- common/autotest_common.sh@861 -- # break 00:31:48.277 16:08:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:48.277 16:08:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:48.277 16:08:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.277 1+0 records in 00:31:48.277 1+0 records out 00:31:48.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397351 s, 10.3 MB/s 00:31:48.277 16:08:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.277 16:08:52 -- common/autotest_common.sh@874 -- # size=4096 00:31:48.277 16:08:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.277 16:08:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:48.277 16:08:52 -- common/autotest_common.sh@877 -- # return 0 00:31:48.277 16:08:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:48.277 16:08:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:48.277 16:08:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:31:48.534 16:08:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:31:48.534 16:08:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:31:48.534 16:08:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:31:48.534 16:08:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:31:48.534 16:08:52 -- common/autotest_common.sh@857 -- # local i 00:31:48.534 16:08:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:48.534 16:08:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:48.534 16:08:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:31:48.534 16:08:52 -- common/autotest_common.sh@861 -- # break 00:31:48.534 16:08:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:48.534 16:08:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:48.534 16:08:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:48.534 1+0 records in 00:31:48.534 1+0 records out 00:31:48.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544483 s, 7.5 MB/s 00:31:48.534 16:08:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.534 16:08:52 -- common/autotest_common.sh@874 -- # size=4096 00:31:48.534 16:08:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:48.534 16:08:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:48.534 16:08:52 -- common/autotest_common.sh@877 -- # 
return 0 00:31:48.534 16:08:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:48.534 16:08:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:48.534 16:08:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:31:48.792 16:08:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:31:48.792 16:08:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:31:48.792 16:08:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:31:48.792 16:08:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:31:48.792 16:08:53 -- common/autotest_common.sh@857 -- # local i 00:31:48.792 16:08:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:48.792 16:08:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:48.792 16:08:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:31:49.049 16:08:53 -- common/autotest_common.sh@861 -- # break 00:31:49.049 16:08:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:49.049 16:08:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:49.049 16:08:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.049 1+0 records in 00:31:49.049 1+0 records out 00:31:49.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399599 s, 10.3 MB/s 00:31:49.049 16:08:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.049 16:08:53 -- common/autotest_common.sh@874 -- # size=4096 00:31:49.049 16:08:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.049 16:08:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:49.049 16:08:53 -- common/autotest_common.sh@877 -- # return 0 00:31:49.049 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:49.049 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:49.049 16:08:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:31:49.049 16:08:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:31:49.049 16:08:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:31:49.049 16:08:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:31:49.049 16:08:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:31:49.049 16:08:53 -- common/autotest_common.sh@857 -- # local i 00:31:49.049 16:08:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:49.049 16:08:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:49.049 16:08:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:31:49.049 16:08:53 -- common/autotest_common.sh@861 -- # break 00:31:49.049 16:08:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:49.049 16:08:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:49.049 16:08:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.307 1+0 records in 00:31:49.307 1+0 records out 00:31:49.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528358 s, 7.8 MB/s 00:31:49.307 16:08:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.307 16:08:53 -- common/autotest_common.sh@874 -- # size=4096 00:31:49.307 16:08:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.307 16:08:53 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:49.307 16:08:53 -- common/autotest_common.sh@877 -- # return 0 00:31:49.307 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:49.307 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:49.307 16:08:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:31:49.565 16:08:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:31:49.565 16:08:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:31:49.565 16:08:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:31:49.565 16:08:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:31:49.565 16:08:53 -- common/autotest_common.sh@857 -- # local i 00:31:49.565 16:08:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:49.565 16:08:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:49.565 16:08:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:31:49.565 16:08:53 -- common/autotest_common.sh@861 -- # break 00:31:49.565 16:08:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:49.565 16:08:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:49.565 16:08:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.565 1+0 records in 00:31:49.565 1+0 records out 00:31:49.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518361 s, 7.9 MB/s 00:31:49.565 16:08:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.565 16:08:53 -- common/autotest_common.sh@874 -- # size=4096 00:31:49.565 16:08:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.565 16:08:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:49.565 16:08:53 -- common/autotest_common.sh@877 -- # return 0 00:31:49.565 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:49.565 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:49.565 16:08:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:31:49.867 16:08:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:31:49.867 16:08:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:31:49.867 16:08:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:31:49.867 16:08:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:31:49.867 16:08:53 -- common/autotest_common.sh@857 -- # local i 00:31:49.867 16:08:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:49.867 16:08:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:49.867 16:08:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:31:49.867 16:08:53 -- common/autotest_common.sh@861 -- # break 00:31:49.867 16:08:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:49.867 16:08:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:49.867 16:08:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:49.867 1+0 records in 00:31:49.867 1+0 records out 00:31:49.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487298 s, 8.4 MB/s 00:31:49.867 16:08:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.867 16:08:53 -- common/autotest_common.sh@874 -- # size=4096 00:31:49.867 16:08:53 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:49.867 16:08:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:49.867 16:08:53 -- common/autotest_common.sh@877 -- # return 0 00:31:49.867 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:49.867 16:08:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:49.867 16:08:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:31:50.151 16:08:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:31:50.151 16:08:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:31:50.151 16:08:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:31:50.151 16:08:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:31:50.151 16:08:54 -- common/autotest_common.sh@857 -- # local i 00:31:50.151 16:08:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:50.151 16:08:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:50.151 16:08:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:31:50.151 16:08:54 -- common/autotest_common.sh@861 -- # break 00:31:50.151 16:08:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:50.151 16:08:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:50.151 16:08:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.151 1+0 records in 00:31:50.151 1+0 records out 00:31:50.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449969 s, 9.1 MB/s 00:31:50.151 16:08:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.151 16:08:54 -- common/autotest_common.sh@874 -- # size=4096 00:31:50.151 16:08:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.151 16:08:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:50.151 16:08:54 -- common/autotest_common.sh@877 -- # return 0 00:31:50.151 16:08:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:50.151 16:08:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:50.151 16:08:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:31:50.409 16:08:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:31:50.409 16:08:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:31:50.409 16:08:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:31:50.409 16:08:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:31:50.409 16:08:54 -- common/autotest_common.sh@857 -- # local i 00:31:50.409 16:08:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:50.409 16:08:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:50.409 16:08:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:31:50.409 16:08:54 -- common/autotest_common.sh@861 -- # break 00:31:50.409 16:08:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:50.409 16:08:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:50.409 16:08:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.409 1+0 records in 00:31:50.409 1+0 records out 00:31:50.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968294 s, 4.2 MB/s 00:31:50.409 16:08:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
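
For every exported device the trace follows the same pattern: rpc.py nbd_start_disk maps a bdev onto /dev/nbdN, then the helper polls /proc/partitions until the device appears and reads one 4 KiB block through it with O_DIRECT, checking that the scratch file is non-empty. A condensed sketch of that check (the 20-iteration limit, grep, dd and stat calls mirror the trace; the sleep between retries and the simplified error handling are assumptions, not the actual common/autotest_common.sh code):

    check_nbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do      # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                          # retry delay assumed
        done
        # read a single 4 KiB block through the NBD device, bypassing the page cache
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
           bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0 ]
    }
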
00:31:50.409 16:08:54 -- common/autotest_common.sh@874 -- # size=4096 00:31:50.409 16:08:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.409 16:08:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:50.409 16:08:54 -- common/autotest_common.sh@877 -- # return 0 00:31:50.409 16:08:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:50.409 16:08:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:50.409 16:08:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:31:50.667 16:08:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:31:50.667 16:08:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:31:50.667 16:08:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:31:50.667 16:08:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:31:50.667 16:08:54 -- common/autotest_common.sh@857 -- # local i 00:31:50.667 16:08:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:50.667 16:08:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:50.667 16:08:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:31:50.667 16:08:54 -- common/autotest_common.sh@861 -- # break 00:31:50.667 16:08:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:50.667 16:08:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:50.667 16:08:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.667 1+0 records in 00:31:50.667 1+0 records out 00:31:50.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00193799 s, 2.1 MB/s 00:31:50.667 16:08:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.667 16:08:54 -- common/autotest_common.sh@874 -- # size=4096 00:31:50.667 16:08:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.667 16:08:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:50.667 16:08:54 -- common/autotest_common.sh@877 -- # return 0 00:31:50.667 16:08:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:50.667 16:08:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:50.667 16:08:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:31:50.925 16:08:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:31:50.925 16:08:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:31:50.925 16:08:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:31:50.925 16:08:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:31:50.925 16:08:55 -- common/autotest_common.sh@857 -- # local i 00:31:50.925 16:08:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:50.925 16:08:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:50.925 16:08:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:31:50.925 16:08:55 -- common/autotest_common.sh@861 -- # break 00:31:50.925 16:08:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:50.925 16:08:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:50.925 16:08:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.925 1+0 records in 00:31:50.925 1+0 records out 00:31:50.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058474 s, 7.0 MB/s 00:31:50.925 16:08:55 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.925 16:08:55 -- common/autotest_common.sh@874 -- # size=4096 00:31:50.925 16:08:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.925 16:08:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:50.925 16:08:55 -- common/autotest_common.sh@877 -- # return 0 00:31:50.925 16:08:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:50.925 16:08:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:50.925 16:08:55 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:31:51.183 16:08:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:31:51.183 16:08:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:31:51.183 16:08:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:31:51.183 16:08:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:31:51.183 16:08:55 -- common/autotest_common.sh@857 -- # local i 00:31:51.183 16:08:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:51.183 16:08:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:51.183 16:08:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:31:51.183 16:08:55 -- common/autotest_common.sh@861 -- # break 00:31:51.183 16:08:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:51.183 16:08:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:51.183 16:08:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:51.183 1+0 records in 00:31:51.183 1+0 records out 00:31:51.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117465 s, 3.5 MB/s 00:31:51.183 16:08:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.183 16:08:55 -- common/autotest_common.sh@874 -- # size=4096 00:31:51.183 16:08:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:51.183 16:08:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:51.183 16:08:55 -- common/autotest_common.sh@877 -- # return 0 00:31:51.183 16:08:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:51.183 16:08:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:31:51.183 16:08:55 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:51.441 16:08:55 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:51.441 { 00:31:51.442 "nbd_device": "/dev/nbd0", 00:31:51.442 "bdev_name": "Malloc0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd1", 00:31:51.442 "bdev_name": "Malloc1p0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd2", 00:31:51.442 "bdev_name": "Malloc1p1" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd3", 00:31:51.442 "bdev_name": "Malloc2p0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd4", 00:31:51.442 "bdev_name": "Malloc2p1" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd5", 00:31:51.442 "bdev_name": "Malloc2p2" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd6", 00:31:51.442 "bdev_name": "Malloc2p3" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd7", 00:31:51.442 "bdev_name": "Malloc2p4" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd8", 00:31:51.442 "bdev_name": "Malloc2p5" 
00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd9", 00:31:51.442 "bdev_name": "Malloc2p6" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd10", 00:31:51.442 "bdev_name": "Malloc2p7" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd11", 00:31:51.442 "bdev_name": "TestPT" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd12", 00:31:51.442 "bdev_name": "raid0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd13", 00:31:51.442 "bdev_name": "concat0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd14", 00:31:51.442 "bdev_name": "raid1" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd15", 00:31:51.442 "bdev_name": "AIO0" 00:31:51.442 } 00:31:51.442 ]' 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd0", 00:31:51.442 "bdev_name": "Malloc0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd1", 00:31:51.442 "bdev_name": "Malloc1p0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd2", 00:31:51.442 "bdev_name": "Malloc1p1" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd3", 00:31:51.442 "bdev_name": "Malloc2p0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd4", 00:31:51.442 "bdev_name": "Malloc2p1" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd5", 00:31:51.442 "bdev_name": "Malloc2p2" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd6", 00:31:51.442 "bdev_name": "Malloc2p3" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd7", 00:31:51.442 "bdev_name": "Malloc2p4" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd8", 00:31:51.442 "bdev_name": "Malloc2p5" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd9", 00:31:51.442 "bdev_name": "Malloc2p6" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd10", 00:31:51.442 "bdev_name": "Malloc2p7" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd11", 00:31:51.442 "bdev_name": "TestPT" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd12", 00:31:51.442 "bdev_name": "raid0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd13", 00:31:51.442 "bdev_name": "concat0" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd14", 00:31:51.442 "bdev_name": "raid1" 00:31:51.442 }, 00:31:51.442 { 00:31:51.442 "nbd_device": "/dev/nbd15", 00:31:51.442 "bdev_name": "AIO0" 00:31:51.442 } 00:31:51.442 ]' 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@51 -- # local i 00:31:51.442 16:08:55 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.442 16:08:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@41 -- # break 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:51.700 16:08:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:51.959 16:08:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:51.959 16:08:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:51.959 16:08:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:51.959 16:08:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.959 16:08:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.959 16:08:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:52.217 16:08:56 -- bdev/nbd_common.sh@41 -- # break 00:31:52.217 16:08:56 -- bdev/nbd_common.sh@45 -- # return 0 00:31:52.217 16:08:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:52.217 16:08:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@41 -- # break 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@45 -- # return 0 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:52.503 16:08:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@41 -- # break 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@45 -- # return 0 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:52.761 16:08:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:31:53.019 16:08:57 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@41 -- # break 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:53.019 16:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:31:53.277 16:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:31:53.277 16:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:31:53.277 16:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:31:53.277 16:08:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.277 16:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.278 16:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:31:53.278 16:08:57 -- bdev/nbd_common.sh@41 -- # break 00:31:53.278 16:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.278 16:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:53.278 16:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@41 -- # break 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:53.536 16:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@41 -- # break 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@45 -- # return 0 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:53.794 16:08:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@41 -- # break 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:54.052 16:08:58 -- bdev/nbd_common.sh@54 -- # 
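
Teardown mirrors setup: nbd_get_disks returns the JSON mapping shown above, jq extracts the /dev/nbdN names, and each device is stopped with nbd_stop_disk before the helper polls /proc/partitions until the entry disappears. A minimal sketch of that loop (rpc.py path, socket and jq filter are taken from the trace; the retry delay is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for dev in $("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'); do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$name" /proc/partitions || break   # device is gone
            sleep 0.1
        done
    done
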
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@41 -- # break 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:54.310 16:08:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@41 -- # break 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:54.568 16:08:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@41 -- # break 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@45 -- # return 0 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:54.826 16:08:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@41 -- # break 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@45 -- # return 0 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:55.084 16:08:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
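
The teardown above repeats the same stop-and-wait pattern for every exported device: nbd_stop_disk over the RPC socket, then a bounded poll of /proc/partitions until the kernel drops the node. A minimal bash sketch of that pattern follows; the helper name and the 0.1 s poll interval are assumptions, not read from nbd_common.sh.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # nbd_stop_and_wait: hypothetical helper mirroring the trace above.
    nbd_stop_and_wait() {
        local dev=$1 name
        name=$(basename "$dev")
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        for ((i = 1; i <= 20; i++)); do
            # the export is gone once its entry disappears from /proc/partitions
            grep -q -w "$name" /proc/partitions || return 0
            sleep 0.1   # poll interval is an assumption
        done
        return 1
    }

    nbd_stop_and_wait /dev/nbd13
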
00:31:55.341 16:08:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@41 -- # break 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@45 -- # return 0 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:55.341 16:08:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@41 -- # break 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@45 -- # return 0 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:55.599 16:08:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@41 -- # break 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@45 -- # return 0 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.857 16:09:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@65 -- # true 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@65 -- # count=0 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@122 -- # count=0 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@127 -- # return 0 00:31:56.115 16:09:00 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:31:56.115 16:09:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@12 -- # local i 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:56.116 16:09:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:56.374 /dev/nbd0 00:31:56.374 16:09:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:56.374 16:09:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:56.374 16:09:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:56.374 16:09:00 -- common/autotest_common.sh@857 -- # local i 00:31:56.374 16:09:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:56.374 16:09:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:56.374 16:09:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:56.374 16:09:00 -- common/autotest_common.sh@861 -- # break 00:31:56.374 16:09:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:56.374 16:09:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:56.374 16:09:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:56.374 1+0 records in 00:31:56.374 1+0 records out 00:31:56.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328641 s, 12.5 MB/s 00:31:56.374 16:09:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.374 16:09:00 -- common/autotest_common.sh@874 -- # size=4096 00:31:56.374 16:09:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.374 16:09:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:56.374 16:09:00 -- common/autotest_common.sh@877 -- # return 0 00:31:56.374 16:09:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:56.374 16:09:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:56.374 16:09:00 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:31:56.633 /dev/nbd1 00:31:56.633 16:09:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:56.633 16:09:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:56.633 16:09:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:31:56.633 16:09:00 -- common/autotest_common.sh@857 -- # local i 00:31:56.633 16:09:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:56.633 16:09:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:56.633 16:09:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:31:56.633 16:09:00 -- common/autotest_common.sh@861 -- # break 00:31:56.633 16:09:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:56.633 16:09:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:56.633 16:09:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:56.633 1+0 records in 00:31:56.633 1+0 records out 00:31:56.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338236 s, 12.1 MB/s 00:31:56.633 16:09:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.633 16:09:00 -- common/autotest_common.sh@874 -- # size=4096 00:31:56.633 16:09:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.633 16:09:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:56.633 16:09:00 -- common/autotest_common.sh@877 -- # return 0 00:31:56.633 16:09:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:56.633 16:09:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:56.633 16:09:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:31:56.891 /dev/nbd10 00:31:56.891 16:09:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:31:56.891 16:09:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:31:56.891 16:09:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:31:56.891 16:09:01 -- common/autotest_common.sh@857 -- # local i 00:31:56.891 16:09:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:56.891 16:09:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:56.891 16:09:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:31:56.891 16:09:01 -- common/autotest_common.sh@861 -- # break 00:31:56.891 16:09:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:56.891 16:09:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:56.891 16:09:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:56.891 1+0 records in 00:31:56.891 1+0 records out 00:31:56.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452223 s, 9.1 MB/s 00:31:56.891 16:09:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.891 16:09:01 -- common/autotest_common.sh@874 -- # size=4096 00:31:56.891 16:09:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.891 16:09:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:56.891 16:09:01 -- common/autotest_common.sh@877 -- # return 0 00:31:56.891 16:09:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:56.891 16:09:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
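
Each nbd_start_disk call above is followed by the same readiness probe (waitfornbd): wait for the node to show up in /proc/partitions, then pull one 4 KiB block with O_DIRECT and check that the copy is non-empty. A hedged reconstruction, with the temp-file handling assumed rather than copied from the test helpers:

    # waitfornbd_sketch: readiness probe as seen in the trace; mktemp usage is an assumption.
    waitfornbd_sketch() {
        local name=$1 tmp size
        tmp=$(mktemp)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # read one 4 KiB block with O_DIRECT and make sure something came back
        dd if="/dev/$name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }

    waitfornbd_sketch nbd1
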
00:31:56.891 16:09:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:31:57.150 /dev/nbd11 00:31:57.150 16:09:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:31:57.150 16:09:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:31:57.150 16:09:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:31:57.150 16:09:01 -- common/autotest_common.sh@857 -- # local i 00:31:57.150 16:09:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:57.150 16:09:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:57.150 16:09:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:31:57.150 16:09:01 -- common/autotest_common.sh@861 -- # break 00:31:57.150 16:09:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:57.150 16:09:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:57.150 16:09:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:57.150 1+0 records in 00:31:57.150 1+0 records out 00:31:57.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474718 s, 8.6 MB/s 00:31:57.150 16:09:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.150 16:09:01 -- common/autotest_common.sh@874 -- # size=4096 00:31:57.150 16:09:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.150 16:09:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:57.150 16:09:01 -- common/autotest_common.sh@877 -- # return 0 00:31:57.150 16:09:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:57.150 16:09:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:57.150 16:09:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:31:57.408 /dev/nbd12 00:31:57.408 16:09:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:31:57.408 16:09:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:31:57.408 16:09:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:31:57.408 16:09:01 -- common/autotest_common.sh@857 -- # local i 00:31:57.408 16:09:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:57.408 16:09:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:57.408 16:09:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:31:57.408 16:09:01 -- common/autotest_common.sh@861 -- # break 00:31:57.408 16:09:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:57.408 16:09:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:57.408 16:09:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:57.408 1+0 records in 00:31:57.408 1+0 records out 00:31:57.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357754 s, 11.4 MB/s 00:31:57.408 16:09:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.408 16:09:01 -- common/autotest_common.sh@874 -- # size=4096 00:31:57.408 16:09:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.408 16:09:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:57.408 16:09:01 -- common/autotest_common.sh@877 -- # return 0 00:31:57.408 16:09:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:57.408 16:09:01 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:57.408 16:09:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:31:57.667 /dev/nbd13 00:31:57.667 16:09:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:31:57.667 16:09:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:31:57.667 16:09:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:31:57.667 16:09:01 -- common/autotest_common.sh@857 -- # local i 00:31:57.667 16:09:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:57.667 16:09:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:57.667 16:09:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:31:57.667 16:09:01 -- common/autotest_common.sh@861 -- # break 00:31:57.667 16:09:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:57.667 16:09:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:57.667 16:09:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:57.667 1+0 records in 00:31:57.667 1+0 records out 00:31:57.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460462 s, 8.9 MB/s 00:31:57.667 16:09:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.667 16:09:01 -- common/autotest_common.sh@874 -- # size=4096 00:31:57.667 16:09:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.667 16:09:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:57.667 16:09:01 -- common/autotest_common.sh@877 -- # return 0 00:31:57.667 16:09:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:57.667 16:09:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:57.667 16:09:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:31:57.949 /dev/nbd14 00:31:57.949 16:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:31:57.949 16:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:31:57.949 16:09:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:31:57.949 16:09:02 -- common/autotest_common.sh@857 -- # local i 00:31:57.949 16:09:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:57.949 16:09:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:57.949 16:09:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:31:57.949 16:09:02 -- common/autotest_common.sh@861 -- # break 00:31:57.949 16:09:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:57.949 16:09:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:57.949 16:09:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:57.949 1+0 records in 00:31:57.949 1+0 records out 00:31:57.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382536 s, 10.7 MB/s 00:31:57.949 16:09:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.949 16:09:02 -- common/autotest_common.sh@874 -- # size=4096 00:31:57.949 16:09:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:57.949 16:09:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:57.949 16:09:02 -- common/autotest_common.sh@877 -- # return 0 00:31:57.949 16:09:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
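
The start loop running here pairs the sixteen bdevs with the sixteen device nodes positionally. The bdev names and node order below are copied verbatim from the trace; the loop itself is an assumed simplification of nbd_start_disks, with the readiness probe from the earlier sketch run after each start.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4
               Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15
              /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)

    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # export bdev i on device node i
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done
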
00:31:57.949 16:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:57.949 16:09:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:31:58.207 /dev/nbd15 00:31:58.207 16:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:31:58.207 16:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:31:58.207 16:09:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:31:58.207 16:09:02 -- common/autotest_common.sh@857 -- # local i 00:31:58.207 16:09:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:58.207 16:09:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:58.207 16:09:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:31:58.207 16:09:02 -- common/autotest_common.sh@861 -- # break 00:31:58.207 16:09:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:58.207 16:09:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:58.207 16:09:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:58.465 1+0 records in 00:31:58.465 1+0 records out 00:31:58.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128656 s, 3.2 MB/s 00:31:58.465 16:09:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.465 16:09:02 -- common/autotest_common.sh@874 -- # size=4096 00:31:58.465 16:09:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.465 16:09:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:58.465 16:09:02 -- common/autotest_common.sh@877 -- # return 0 00:31:58.465 16:09:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:58.465 16:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:58.465 16:09:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:31:58.722 /dev/nbd2 00:31:58.722 16:09:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:31:58.722 16:09:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:31:58.722 16:09:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:31:58.722 16:09:02 -- common/autotest_common.sh@857 -- # local i 00:31:58.722 16:09:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:58.722 16:09:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:58.722 16:09:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:31:58.722 16:09:02 -- common/autotest_common.sh@861 -- # break 00:31:58.722 16:09:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:58.722 16:09:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:58.722 16:09:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:58.722 1+0 records in 00:31:58.722 1+0 records out 00:31:58.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619789 s, 6.6 MB/s 00:31:58.722 16:09:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.722 16:09:02 -- common/autotest_common.sh@874 -- # size=4096 00:31:58.722 16:09:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.722 16:09:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:58.722 16:09:02 -- common/autotest_common.sh@877 -- # return 0 00:31:58.722 16:09:02 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:31:58.722 16:09:02 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:58.722 16:09:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:31:58.979 /dev/nbd3 00:31:58.979 16:09:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:31:58.979 16:09:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:31:58.979 16:09:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:31:58.979 16:09:03 -- common/autotest_common.sh@857 -- # local i 00:31:58.979 16:09:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:58.980 16:09:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:58.980 16:09:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:31:58.980 16:09:03 -- common/autotest_common.sh@861 -- # break 00:31:58.980 16:09:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:58.980 16:09:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:58.980 16:09:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:58.980 1+0 records in 00:31:58.980 1+0 records out 00:31:58.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717827 s, 5.7 MB/s 00:31:58.980 16:09:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.980 16:09:03 -- common/autotest_common.sh@874 -- # size=4096 00:31:58.980 16:09:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:58.980 16:09:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:58.980 16:09:03 -- common/autotest_common.sh@877 -- # return 0 00:31:58.980 16:09:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:58.980 16:09:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:58.980 16:09:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:31:59.237 /dev/nbd4 00:31:59.237 16:09:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:31:59.237 16:09:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:31:59.237 16:09:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:31:59.237 16:09:03 -- common/autotest_common.sh@857 -- # local i 00:31:59.237 16:09:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:59.237 16:09:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:59.237 16:09:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:31:59.237 16:09:03 -- common/autotest_common.sh@861 -- # break 00:31:59.237 16:09:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:59.237 16:09:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:59.237 16:09:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.237 1+0 records in 00:31:59.237 1+0 records out 00:31:59.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084075 s, 4.9 MB/s 00:31:59.237 16:09:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.237 16:09:03 -- common/autotest_common.sh@874 -- # size=4096 00:31:59.237 16:09:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.237 16:09:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:59.237 16:09:03 -- common/autotest_common.sh@877 -- # return 0 00:31:59.237 16:09:03 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.237 16:09:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:59.237 16:09:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:31:59.237 /dev/nbd5 00:31:59.495 16:09:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:31:59.496 16:09:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:31:59.496 16:09:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:31:59.496 16:09:03 -- common/autotest_common.sh@857 -- # local i 00:31:59.496 16:09:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:59.496 16:09:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:59.496 16:09:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:31:59.496 16:09:03 -- common/autotest_common.sh@861 -- # break 00:31:59.496 16:09:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:59.496 16:09:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:59.496 16:09:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.496 1+0 records in 00:31:59.496 1+0 records out 00:31:59.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785407 s, 5.2 MB/s 00:31:59.496 16:09:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.496 16:09:03 -- common/autotest_common.sh@874 -- # size=4096 00:31:59.496 16:09:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.496 16:09:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:59.496 16:09:03 -- common/autotest_common.sh@877 -- # return 0 00:31:59.496 16:09:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.496 16:09:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:59.496 16:09:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:31:59.754 /dev/nbd6 00:31:59.754 16:09:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:31:59.754 16:09:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:31:59.754 16:09:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:31:59.754 16:09:03 -- common/autotest_common.sh@857 -- # local i 00:31:59.754 16:09:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:59.754 16:09:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:59.754 16:09:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:31:59.754 16:09:03 -- common/autotest_common.sh@861 -- # break 00:31:59.754 16:09:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:59.754 16:09:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:59.754 16:09:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:59.754 1+0 records in 00:31:59.754 1+0 records out 00:31:59.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117009 s, 3.5 MB/s 00:31:59.754 16:09:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.754 16:09:03 -- common/autotest_common.sh@874 -- # size=4096 00:31:59.754 16:09:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:59.754 16:09:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:59.754 16:09:03 -- common/autotest_common.sh@877 -- # return 0 00:31:59.754 16:09:03 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:59.754 16:09:03 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:31:59.754 16:09:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:32:00.012 /dev/nbd7 00:32:00.012 16:09:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:32:00.012 16:09:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:32:00.012 16:09:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:32:00.012 16:09:04 -- common/autotest_common.sh@857 -- # local i 00:32:00.012 16:09:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:32:00.012 16:09:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:32:00.012 16:09:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:32:00.012 16:09:04 -- common/autotest_common.sh@861 -- # break 00:32:00.012 16:09:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:32:00.012 16:09:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:32:00.012 16:09:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:00.012 1+0 records in 00:32:00.012 1+0 records out 00:32:00.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645368 s, 6.3 MB/s 00:32:00.012 16:09:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:00.012 16:09:04 -- common/autotest_common.sh@874 -- # size=4096 00:32:00.012 16:09:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:00.012 16:09:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:32:00.012 16:09:04 -- common/autotest_common.sh@877 -- # return 0 00:32:00.012 16:09:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:00.012 16:09:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:32:00.012 16:09:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:32:00.270 /dev/nbd8 00:32:00.270 16:09:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:32:00.270 16:09:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:32:00.270 16:09:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:32:00.270 16:09:04 -- common/autotest_common.sh@857 -- # local i 00:32:00.270 16:09:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:32:00.270 16:09:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:32:00.270 16:09:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:32:00.270 16:09:04 -- common/autotest_common.sh@861 -- # break 00:32:00.270 16:09:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:32:00.270 16:09:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:32:00.270 16:09:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:00.270 1+0 records in 00:32:00.271 1+0 records out 00:32:00.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062871 s, 6.5 MB/s 00:32:00.271 16:09:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:00.271 16:09:04 -- common/autotest_common.sh@874 -- # size=4096 00:32:00.271 16:09:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:00.271 16:09:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:32:00.271 16:09:04 -- common/autotest_common.sh@877 -- # return 0 00:32:00.271 16:09:04 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:00.271 16:09:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:32:00.271 16:09:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:32:00.562 /dev/nbd9 00:32:00.562 16:09:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:32:00.562 16:09:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:32:00.562 16:09:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:32:00.562 16:09:04 -- common/autotest_common.sh@857 -- # local i 00:32:00.562 16:09:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:32:00.562 16:09:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:32:00.562 16:09:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:32:00.562 16:09:04 -- common/autotest_common.sh@861 -- # break 00:32:00.562 16:09:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:32:00.562 16:09:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:32:00.562 16:09:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:00.562 1+0 records in 00:32:00.562 1+0 records out 00:32:00.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100119 s, 4.1 MB/s 00:32:00.562 16:09:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:00.562 16:09:04 -- common/autotest_common.sh@874 -- # size=4096 00:32:00.562 16:09:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:00.562 16:09:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:32:00.562 16:09:04 -- common/autotest_common.sh@877 -- # return 0 00:32:00.562 16:09:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:00.562 16:09:04 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:32:00.562 16:09:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:00.562 16:09:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.562 16:09:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:00.820 16:09:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:00.820 { 00:32:00.820 "nbd_device": "/dev/nbd0", 00:32:00.820 "bdev_name": "Malloc0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd1", 00:32:00.821 "bdev_name": "Malloc1p0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd10", 00:32:00.821 "bdev_name": "Malloc1p1" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd11", 00:32:00.821 "bdev_name": "Malloc2p0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd12", 00:32:00.821 "bdev_name": "Malloc2p1" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd13", 00:32:00.821 "bdev_name": "Malloc2p2" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd14", 00:32:00.821 "bdev_name": "Malloc2p3" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd15", 00:32:00.821 "bdev_name": "Malloc2p4" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd2", 00:32:00.821 "bdev_name": "Malloc2p5" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd3", 00:32:00.821 "bdev_name": "Malloc2p6" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd4", 00:32:00.821 "bdev_name": "Malloc2p7" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd5", 00:32:00.821 "bdev_name": 
"TestPT" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd6", 00:32:00.821 "bdev_name": "raid0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd7", 00:32:00.821 "bdev_name": "concat0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd8", 00:32:00.821 "bdev_name": "raid1" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd9", 00:32:00.821 "bdev_name": "AIO0" 00:32:00.821 } 00:32:00.821 ]' 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd0", 00:32:00.821 "bdev_name": "Malloc0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd1", 00:32:00.821 "bdev_name": "Malloc1p0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd10", 00:32:00.821 "bdev_name": "Malloc1p1" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd11", 00:32:00.821 "bdev_name": "Malloc2p0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd12", 00:32:00.821 "bdev_name": "Malloc2p1" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd13", 00:32:00.821 "bdev_name": "Malloc2p2" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd14", 00:32:00.821 "bdev_name": "Malloc2p3" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd15", 00:32:00.821 "bdev_name": "Malloc2p4" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd2", 00:32:00.821 "bdev_name": "Malloc2p5" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd3", 00:32:00.821 "bdev_name": "Malloc2p6" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd4", 00:32:00.821 "bdev_name": "Malloc2p7" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd5", 00:32:00.821 "bdev_name": "TestPT" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd6", 00:32:00.821 "bdev_name": "raid0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd7", 00:32:00.821 "bdev_name": "concat0" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd8", 00:32:00.821 "bdev_name": "raid1" 00:32:00.821 }, 00:32:00.821 { 00:32:00.821 "nbd_device": "/dev/nbd9", 00:32:00.821 "bdev_name": "AIO0" 00:32:00.821 } 00:32:00.821 ]' 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:00.821 /dev/nbd1 00:32:00.821 /dev/nbd10 00:32:00.821 /dev/nbd11 00:32:00.821 /dev/nbd12 00:32:00.821 /dev/nbd13 00:32:00.821 /dev/nbd14 00:32:00.821 /dev/nbd15 00:32:00.821 /dev/nbd2 00:32:00.821 /dev/nbd3 00:32:00.821 /dev/nbd4 00:32:00.821 /dev/nbd5 00:32:00.821 /dev/nbd6 00:32:00.821 /dev/nbd7 00:32:00.821 /dev/nbd8 00:32:00.821 /dev/nbd9' 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:32:00.821 /dev/nbd1 00:32:00.821 /dev/nbd10 00:32:00.821 /dev/nbd11 00:32:00.821 /dev/nbd12 00:32:00.821 /dev/nbd13 00:32:00.821 /dev/nbd14 00:32:00.821 /dev/nbd15 00:32:00.821 /dev/nbd2 00:32:00.821 /dev/nbd3 00:32:00.821 /dev/nbd4 00:32:00.821 /dev/nbd5 00:32:00.821 /dev/nbd6 00:32:00.821 /dev/nbd7 00:32:00.821 /dev/nbd8 00:32:00.821 /dev/nbd9' 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@65 -- # count=16 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@66 -- # echo 16 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@95 -- # count=16 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:32:00.821 16:09:04 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:00.821 256+0 records in 00:32:00.821 256+0 records out 00:32:00.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00782092 s, 134 MB/s 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:00.821 16:09:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:01.080 256+0 records in 00:32:01.080 256+0 records out 00:32:01.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150573 s, 7.0 MB/s 00:32:01.080 16:09:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:01.080 16:09:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:01.080 256+0 records in 00:32:01.080 256+0 records out 00:32:01.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15395 s, 6.8 MB/s 00:32:01.080 16:09:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:01.080 16:09:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:32:01.338 256+0 records in 00:32:01.338 256+0 records out 00:32:01.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155049 s, 6.8 MB/s 00:32:01.338 16:09:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:01.338 16:09:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:32:01.596 256+0 records in 00:32:01.596 256+0 records out 00:32:01.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15449 s, 6.8 MB/s 00:32:01.596 16:09:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:01.596 16:09:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:32:01.596 256+0 records in 00:32:01.596 256+0 records out 00:32:01.596 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15369 s, 6.8 MB/s 00:32:01.596 16:09:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:01.596 16:09:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:32:01.854 256+0 records in 00:32:01.854 256+0 records out 00:32:01.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154747 s, 6.8 MB/s 00:32:01.854 16:09:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:01.854 16:09:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:32:01.854 256+0 records in 
00:32:01.854 256+0 records out 00:32:01.854 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157331 s, 6.7 MB/s 00:32:01.854 16:09:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:01.854 16:09:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:32:02.113 256+0 records in 00:32:02.113 256+0 records out 00:32:02.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155231 s, 6.8 MB/s 00:32:02.113 16:09:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:02.113 16:09:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:32:02.370 256+0 records in 00:32:02.370 256+0 records out 00:32:02.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15433 s, 6.8 MB/s 00:32:02.370 16:09:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:02.370 16:09:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:32:02.370 256+0 records in 00:32:02.370 256+0 records out 00:32:02.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153019 s, 6.9 MB/s 00:32:02.370 16:09:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:02.370 16:09:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:32:02.627 256+0 records in 00:32:02.627 256+0 records out 00:32:02.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153814 s, 6.8 MB/s 00:32:02.627 16:09:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:02.627 16:09:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:32:02.627 256+0 records in 00:32:02.627 256+0 records out 00:32:02.627 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154997 s, 6.8 MB/s 00:32:02.627 16:09:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:02.627 16:09:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:32:02.885 256+0 records in 00:32:02.885 256+0 records out 00:32:02.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155101 s, 6.8 MB/s 00:32:02.885 16:09:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:02.885 16:09:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:32:03.154 256+0 records in 00:32:03.154 256+0 records out 00:32:03.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158215 s, 6.6 MB/s 00:32:03.154 16:09:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:03.154 16:09:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:32:03.154 256+0 records in 00:32:03.154 256+0 records out 00:32:03.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162053 s, 6.5 MB/s 00:32:03.154 16:09:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:03.154 16:09:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:32:03.412 256+0 records in 00:32:03.412 256+0 records out 00:32:03.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.232931 s, 4.5 MB/s 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.412 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@51 -- # local i 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.671 16:09:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@41 -- # break 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:03.929 16:09:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@41 -- # break 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:04.187 16:09:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:32:04.444 
16:09:08 -- bdev/nbd_common.sh@41 -- # break 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:32:04.444 16:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:04.445 16:09:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@41 -- # break 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@45 -- # return 0 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:04.703 16:09:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@41 -- # break 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@45 -- # return 0 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:04.961 16:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@41 -- # break 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@45 -- # return 0 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:05.219 16:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@41 -- # break 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@45 -- # return 0 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:05.477 16:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:32:05.771 16:09:09 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@41 -- # break 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@45 -- # return 0 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:05.771 16:09:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@41 -- # break 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.030 16:09:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@41 -- # break 00:32:06.287 16:09:10 -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.288 16:09:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.288 16:09:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@41 -- # break 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.546 16:09:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@41 -- # break 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@45 -- # return 0 
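The per-device teardown traced above repeats one idiom for every entry in nbd_list: nbd_stop_disk is issued over the /var/tmp/spdk-nbd.sock RPC socket, then waitfornbd_exit polls /proc/partitions (up to 20 iterations) until the kernel has dropped the device. A minimal sketch of that polling helper, written from the trace rather than from the script source; the 0.1 s sleep is an assumption, and the sleep branch is simply never reached here because each device is already gone on the first check:

wait_for_nbd_exit() {                     # sketch; mirrors the waitfornbd_exit calls in the trace
    local nbd_name=$1                     # e.g. "nbd0" (basename of /dev/nbd0)
    for ((i = 1; i <= 20; i++)); do       # bounded retry, as seen in the (( i <= 20 )) trace lines
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1                     # assumption: brief pause before re-checking
        else
            break                         # device no longer listed in /proc/partitions -> done
        fi
    done
    return 0
}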
00:32:06.804 16:09:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.804 16:09:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@41 -- # break 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.062 16:09:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@41 -- # break 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.320 16:09:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:32:07.577 16:09:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:32:07.577 16:09:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:32:07.577 16:09:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:32:07.577 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.577 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.577 16:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:32:07.578 16:09:11 -- bdev/nbd_common.sh@41 -- # break 00:32:07.578 16:09:11 -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.578 16:09:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.578 16:09:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@41 -- # break 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:07.835 16:09:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 
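Once the last device is stopped, the count check that begins here and continues below queries nbd_get_disks over the same RPC socket, pulls each .nbd_device field out with jq, and counts lines matching /dev/nbd; the bare true in the trace suggests grep's non-zero exit on an empty list is deliberately swallowed so the script survives under set -e. A minimal sketch of that counting idiom, using the rpc.py path and socket shown in the log; the wrapper function name is illustrative only:

count_exported_nbds() {                   # sketch of the nbd_get_count idiom seen in the trace
    local rpc_server=/var/tmp/spdk-nbd.sock
    local disks_json count
    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    # Extract every .nbd_device field and count the /dev/nbd entries;
    # '|| true' keeps grep's non-zero status (no matches) from aborting under 'set -e'.
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "$count"
}

In the trace the RPC returns '[]', so the count is 0 and the '[ 0 -ne 0 ]' guard that follows falls through without flagging leftover devices.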
00:32:08.092 16:09:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@65 -- # true 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@65 -- # count=0 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@104 -- # count=0 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@109 -- # return 0 00:32:08.092 16:09:12 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:08.092 16:09:12 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:08.350 malloc_lvol_verify 00:32:08.350 16:09:12 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:08.607 fb76b668-3a10-4722-b2cb-83710ef965df 00:32:08.607 16:09:12 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:08.865 7d645eea-e0f5-46d4-942f-cdaeb1db794f 00:32:08.865 16:09:13 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:09.123 /dev/nbd0 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:09.123 mke2fs 1.47.0 (5-Feb-2023) 00:32:09.123 00:32:09.123 Filesystem too small for a journal 00:32:09.123 Discarding device blocks: 0/1024 done 00:32:09.123 Creating filesystem with 1024 4k blocks and 1024 inodes 00:32:09.123 00:32:09.123 Allocating group tables: 0/1 done 00:32:09.123 Writing inode tables: 0/1 done 00:32:09.123 Writing superblocks and filesystem accounting information: 0/1 done 00:32:09.123 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@51 -- # local i 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:09.123 16:09:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:09.382 16:09:13 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@41 -- # break 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@45 -- # return 0 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:09.382 16:09:13 -- bdev/nbd_common.sh@147 -- # return 0 00:32:09.382 16:09:13 -- bdev/blockdev.sh@324 -- # killprocess 66668 00:32:09.382 16:09:13 -- common/autotest_common.sh@926 -- # '[' -z 66668 ']' 00:32:09.382 16:09:13 -- common/autotest_common.sh@930 -- # kill -0 66668 00:32:09.382 16:09:13 -- common/autotest_common.sh@931 -- # uname 00:32:09.382 16:09:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:09.382 16:09:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66668 00:32:09.382 16:09:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:09.382 killing process with pid 66668 00:32:09.382 16:09:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:09.382 16:09:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66668' 00:32:09.382 16:09:13 -- common/autotest_common.sh@945 -- # kill 66668 00:32:09.382 16:09:13 -- common/autotest_common.sh@950 -- # wait 66668 00:32:11.934 16:09:15 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:32:11.934 00:32:11.934 real 0m27.070s 00:32:11.934 user 0m37.491s 00:32:11.934 sys 0m9.701s 00:32:11.934 16:09:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.934 16:09:15 -- common/autotest_common.sh@10 -- # set +x 00:32:11.934 ************************************ 00:32:11.934 END TEST bdev_nbd 00:32:11.934 ************************************ 00:32:11.934 16:09:16 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:32:11.934 16:09:16 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.934 16:09:16 -- common/autotest_common.sh@10 -- # set +x 00:32:11.934 ************************************ 00:32:11.934 START TEST bdev_fio 00:32:11.934 ************************************ 00:32:11.934 16:09:16 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@329 -- # local env_context 00:32:11.934 16:09:16 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:32:11.934 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:32:11.934 16:09:16 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:11.934 16:09:16 -- bdev/blockdev.sh@337 -- # echo '' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:32:11.934 16:09:16 -- bdev/blockdev.sh@337 -- # env_context= 00:32:11.934 16:09:16 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:11.934 16:09:16 -- common/autotest_common.sh@1260 -- # local workload=verify 00:32:11.934 16:09:16 -- common/autotest_common.sh@1261 -- # local 
bdev_type=AIO 00:32:11.934 16:09:16 -- common/autotest_common.sh@1262 -- # local env_context= 00:32:11.934 16:09:16 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:32:11.934 16:09:16 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:11.934 16:09:16 -- common/autotest_common.sh@1280 -- # cat 00:32:11.934 16:09:16 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1293 -- # cat 00:32:11.934 16:09:16 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:32:11.934 16:09:16 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:32:11.934 16:09:16 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:11.934 16:09:16 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:32:11.934 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.934 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:32:11.934 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.934 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:32:11.934 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.934 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:32:11.934 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.934 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:32:11.934 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.934 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:32:11.934 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:32:11.934 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo 
'[job_Malloc2p7]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:32:11.935 16:09:16 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:32:11.935 16:09:16 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:32:11.935 16:09:16 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:32:11.935 16:09:16 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:11.935 16:09:16 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:32:11.935 16:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.935 16:09:16 -- common/autotest_common.sh@10 -- # set +x 00:32:11.935 ************************************ 00:32:11.935 START TEST bdev_fio_rw_verify 00:32:11.935 ************************************ 00:32:11.935 16:09:16 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:11.935 16:09:16 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:11.935 16:09:16 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:11.935 16:09:16 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:11.935 16:09:16 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:11.935 16:09:16 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:11.935 16:09:16 -- common/autotest_common.sh@1320 -- # shift 00:32:11.935 16:09:16 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:11.935 16:09:16 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.935 16:09:16 -- common/autotest_common.sh@1324 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:11.935 16:09:16 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:11.935 16:09:16 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:11.935 16:09:16 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:32:11.935 16:09:16 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:32:11.935 16:09:16 -- common/autotest_common.sh@1326 -- # break 00:32:11.935 16:09:16 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:11.935 16:09:16 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:12.194 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:12.194 fio-3.35 00:32:12.194 Starting 16 threads 00:32:24.391 00:32:24.391 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=67818: Mon Jul 22 16:09:27 2024 00:32:24.391 read: IOPS=76.8k, BW=300MiB/s (315MB/s)(3002MiB/10001msec) 00:32:24.391 slat (usec): min=2, max=14045, avg=38.95, stdev=253.96 00:32:24.391 clat (usec): min=9, max=14296, avg=295.43, stdev=696.06 00:32:24.391 lat (usec): min=27, max=14317, avg=334.38, stdev=739.33 00:32:24.391 clat percentiles (usec): 
00:32:24.391 | 50.000th=[ 178], 99.000th=[ 4228], 99.900th=[ 7242], 99.990th=[10421], 00:32:24.391 | 99.999th=[13173] 00:32:24.391 write: IOPS=122k, BW=476MiB/s (499MB/s)(4711MiB/9907msec); 0 zone resets 00:32:24.391 slat (usec): min=6, max=17058, avg=63.60, stdev=326.02 00:32:24.391 clat (usec): min=10, max=17313, avg=375.00, stdev=797.33 00:32:24.391 lat (usec): min=36, max=17334, avg=438.60, stdev=858.68 00:32:24.391 clat percentiles (usec): 00:32:24.391 | 50.000th=[ 225], 99.000th=[ 4359], 99.900th=[ 7439], 99.990th=[10814], 00:32:24.391 | 99.999th=[17171] 00:32:24.391 bw ( KiB/s): min=312632, max=744578, per=99.21%, avg=483132.47, stdev=7908.86, samples=304 00:32:24.391 iops : min=78158, max=186144, avg=120782.89, stdev=1977.21, samples=304 00:32:24.391 lat (usec) : 10=0.01%, 20=0.01%, 50=0.69%, 100=12.48%, 250=53.50% 00:32:24.391 lat (usec) : 500=28.13%, 750=1.61%, 1000=0.19% 00:32:24.391 lat (msec) : 2=0.18%, 4=1.16%, 10=2.01%, 20=0.02% 00:32:24.391 cpu : usr=57.83%, sys=2.34%, ctx=240253, majf=0, minf=100829 00:32:24.391 IO depths : 1=11.1%, 2=23.8%, 4=52.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.391 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.391 issued rwts: total=768473,1206069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.391 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:24.391 00:32:24.391 Run status group 0 (all jobs): 00:32:24.391 READ: bw=300MiB/s (315MB/s), 300MiB/s-300MiB/s (315MB/s-315MB/s), io=3002MiB (3148MB), run=10001-10001msec 00:32:24.391 WRITE: bw=476MiB/s (499MB/s), 476MiB/s-476MiB/s (499MB/s-499MB/s), io=4711MiB (4940MB), run=9907-9907msec 00:32:26.292 ----------------------------------------------------- 00:32:26.292 Suppressions used: 00:32:26.292 count bytes template 00:32:26.292 16 140 /usr/src/fio/parse.c 00:32:26.292 14368 1379328 /usr/src/fio/iolog.c 00:32:26.292 1 904 libcrypto.so 00:32:26.292 ----------------------------------------------------- 00:32:26.292 00:32:26.292 00:32:26.292 real 0m14.352s 00:32:26.292 user 1m37.588s 00:32:26.292 sys 0m4.799s 00:32:26.292 16:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.292 ************************************ 00:32:26.292 16:09:30 -- common/autotest_common.sh@10 -- # set +x 00:32:26.292 END TEST bdev_fio_rw_verify 00:32:26.292 ************************************ 00:32:26.292 16:09:30 -- bdev/blockdev.sh@348 -- # rm -f 00:32:26.292 16:09:30 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:26.292 16:09:30 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:32:26.292 16:09:30 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:26.292 16:09:30 -- common/autotest_common.sh@1260 -- # local workload=trim 00:32:26.292 16:09:30 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:32:26.292 16:09:30 -- common/autotest_common.sh@1262 -- # local env_context= 00:32:26.292 16:09:30 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:32:26.292 16:09:30 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:26.292 16:09:30 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:32:26.292 16:09:30 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:32:26.292 16:09:30 -- common/autotest_common.sh@1278 -- # touch 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:26.292 16:09:30 -- common/autotest_common.sh@1280 -- # cat 00:32:26.292 16:09:30 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:32:26.292 16:09:30 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:32:26.292 16:09:30 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:32:26.292 16:09:30 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:26.293 16:09:30 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6a2f14b8-0268-4a2a-8a3f-7722ddabd0e3"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6a2f14b8-0268-4a2a-8a3f-7722ddabd0e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5095685b-eef3-5e98-928b-283e678e9700"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5095685b-eef3-5e98-928b-283e678e9700",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "84e0987f-0e31-513a-b8c6-b8407aafb8e4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "84e0987f-0e31-513a-b8c6-b8407aafb8e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b2a5853b-eeae-5cd2-8b5e-77debd17fc87"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b2a5853b-eeae-5cd2-8b5e-77debd17fc87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' 
' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a527e21e-f2eb-5894-82c4-7cdde71731a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a527e21e-f2eb-5894-82c4-7cdde71731a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "22de300d-0a34-5792-8b10-596203bfca03"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "22de300d-0a34-5792-8b10-596203bfca03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "48174eb4-79ff-563c-9779-382754ac47d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48174eb4-79ff-563c-9779-382754ac47d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3b97a27a-62ee-5c85-8ab2-34e337cdc6ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3b97a27a-62ee-5c85-8ab2-34e337cdc6ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8b4944d7-8f42-5d37-a225-80d890d59c1a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8b4944d7-8f42-5d37-a225-80d890d59c1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0fb3ea35-c2ae-5376-bd11-27b464d998b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0fb3ea35-c2ae-5376-bd11-27b464d998b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "57b8d027-60e3-5469-8442-31c658f4e72d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57b8d027-60e3-5469-8442-31c658f4e72d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a0472dad-482b-5d2d-900e-55316b26d197"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a0472dad-482b-5d2d-900e-55316b26d197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "05d9c3cc-2461-4cd8-953f-d0b09f4cc743"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "05d9c3cc-2461-4cd8-953f-d0b09f4cc743",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "05d9c3cc-2461-4cd8-953f-d0b09f4cc743",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b947b01b-6074-4bdc-9142-b5967846664c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "5236a70f-e527-4a54-9bf5-92ce3e6fd356",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "789eff3a-9e39-41b9-aa23-e890ee257d85"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "789eff3a-9e39-41b9-aa23-e890ee257d85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "789eff3a-9e39-41b9-aa23-e890ee257d85",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0e465cb4-55ce-4ba9-92f4-52950eab0751",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "206112d9-bd20-435b-bd41-0a2881fefdd9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "93def095-d54c-46f9-93c6-ec09849d919d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "93def095-d54c-46f9-93c6-ec09849d919d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "93def095-d54c-46f9-93c6-ec09849d919d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "4cc9bf7f-222d-4142-a11e-19a69951bd63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9355d2f3-492b-47d7-b9ca-21188519c7d5",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2b15f40a-3a0d-4da2-9159-b889a55625b3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2b15f40a-3a0d-4da2-9159-b889a55625b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:32:26.293 16:09:30 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:32:26.293 Malloc1p0 00:32:26.293 Malloc1p1 00:32:26.293 Malloc2p0 00:32:26.293 Malloc2p1 00:32:26.293 Malloc2p2 00:32:26.293 Malloc2p3 00:32:26.293 Malloc2p4 00:32:26.293 Malloc2p5 00:32:26.293 Malloc2p6 00:32:26.293 Malloc2p7 00:32:26.293 TestPT 00:32:26.293 raid0 00:32:26.293 concat0 ]] 00:32:26.293 16:09:30 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:26.295 16:09:30 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "6a2f14b8-0268-4a2a-8a3f-7722ddabd0e3"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "6a2f14b8-0268-4a2a-8a3f-7722ddabd0e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5095685b-eef3-5e98-928b-283e678e9700"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5095685b-eef3-5e98-928b-283e678e9700",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "84e0987f-0e31-513a-b8c6-b8407aafb8e4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "84e0987f-0e31-513a-b8c6-b8407aafb8e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b2a5853b-eeae-5cd2-8b5e-77debd17fc87"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b2a5853b-eeae-5cd2-8b5e-77debd17fc87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a527e21e-f2eb-5894-82c4-7cdde71731a0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a527e21e-f2eb-5894-82c4-7cdde71731a0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "22de300d-0a34-5792-8b10-596203bfca03"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "22de300d-0a34-5792-8b10-596203bfca03",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "48174eb4-79ff-563c-9779-382754ac47d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "48174eb4-79ff-563c-9779-382754ac47d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3b97a27a-62ee-5c85-8ab2-34e337cdc6ae"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3b97a27a-62ee-5c85-8ab2-34e337cdc6ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8b4944d7-8f42-5d37-a225-80d890d59c1a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8b4944d7-8f42-5d37-a225-80d890d59c1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0fb3ea35-c2ae-5376-bd11-27b464d998b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0fb3ea35-c2ae-5376-bd11-27b464d998b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "57b8d027-60e3-5469-8442-31c658f4e72d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "57b8d027-60e3-5469-8442-31c658f4e72d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a0472dad-482b-5d2d-900e-55316b26d197"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a0472dad-482b-5d2d-900e-55316b26d197",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "05d9c3cc-2461-4cd8-953f-d0b09f4cc743"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "05d9c3cc-2461-4cd8-953f-d0b09f4cc743",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "05d9c3cc-2461-4cd8-953f-d0b09f4cc743",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b947b01b-6074-4bdc-9142-b5967846664c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "5236a70f-e527-4a54-9bf5-92ce3e6fd356",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "789eff3a-9e39-41b9-aa23-e890ee257d85"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "789eff3a-9e39-41b9-aa23-e890ee257d85",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "789eff3a-9e39-41b9-aa23-e890ee257d85",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0e465cb4-55ce-4ba9-92f4-52950eab0751",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "206112d9-bd20-435b-bd41-0a2881fefdd9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "93def095-d54c-46f9-93c6-ec09849d919d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "93def095-d54c-46f9-93c6-ec09849d919d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "93def095-d54c-46f9-93c6-ec09849d919d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "4cc9bf7f-222d-4142-a11e-19a69951bd63",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "9355d2f3-492b-47d7-b9ca-21188519c7d5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2b15f40a-3a0d-4da2-9159-b889a55625b3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2b15f40a-3a0d-4da2-9159-b889a55625b3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo 
filename=Malloc2p2 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:32:26.553 16:09:30 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:26.553 16:09:30 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:32:26.553 16:09:30 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:32:26.553 16:09:30 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:26.553 16:09:30 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:32:26.553 16:09:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:26.553 16:09:30 -- common/autotest_common.sh@10 -- # set +x 00:32:26.553 ************************************ 00:32:26.553 START TEST bdev_fio_trim 00:32:26.553 ************************************ 00:32:26.553 16:09:30 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:26.553 16:09:30 -- common/autotest_common.sh@1335 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:26.553 16:09:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:26.553 16:09:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:26.553 16:09:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:26.553 16:09:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:26.553 16:09:30 -- common/autotest_common.sh@1320 -- # shift 00:32:26.553 16:09:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:26.553 16:09:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:26.553 16:09:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:26.553 16:09:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:26.553 16:09:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:26.553 16:09:30 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:32:26.553 16:09:30 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:32:26.553 16:09:30 -- common/autotest_common.sh@1326 -- # break 00:32:26.553 16:09:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:26.553 16:09:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:26.553 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.553 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.553 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.553 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.553 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.553 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.553 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.554 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.554 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.554 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.554 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.554 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:32:26.554 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.554 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:26.554 fio-3.35 00:32:26.554 Starting 14 threads 00:32:38.750 00:32:38.750 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=68023: Mon Jul 22 16:09:42 2024 00:32:38.750 write: IOPS=135k, BW=527MiB/s (553MB/s)(5274MiB/10001msec); 0 zone resets 00:32:38.751 slat (usec): min=2, max=13042, avg=38.01, stdev=215.67 00:32:38.751 clat (usec): min=13, max=13207, avg=261.59, stdev=570.64 00:32:38.751 lat (usec): min=31, max=13253, avg=299.60, stdev=608.84 00:32:38.751 clat percentiles (usec): 00:32:38.751 | 50.000th=[ 174], 99.000th=[ 4146], 99.900th=[ 7111], 99.990th=[ 7832], 00:32:38.751 | 99.999th=[11207] 00:32:38.751 bw ( KiB/s): min=343464, max=831937, per=100.00%, avg=543259.68, stdev=10914.85, samples=266 00:32:38.751 iops : min=85866, max=207981, avg=135814.47, stdev=2728.66, samples=266 00:32:38.751 trim: IOPS=135k, BW=527MiB/s (553MB/s)(5274MiB/10001msec); 0 zone resets 00:32:38.751 slat (usec): min=4, max=13079, avg=25.03, stdev=175.60 00:32:38.751 clat (usec): min=4, max=13253, avg=283.43, stdev=592.29 00:32:38.751 lat (usec): min=13, max=13409, avg=308.46, stdev=617.35 00:32:38.751 clat percentiles (usec): 00:32:38.751 | 50.000th=[ 194], 99.000th=[ 4178], 99.900th=[ 7177], 99.990th=[ 8094], 00:32:38.751 | 99.999th=[11207] 00:32:38.751 bw ( KiB/s): min=343528, max=831929, per=100.00%, avg=543259.68, stdev=10914.41, samples=266 00:32:38.751 iops : min=85882, max=207981, avg=135814.47, stdev=2728.56, samples=266 00:32:38.751 lat (usec) : 10=0.08%, 20=0.28%, 50=1.30%, 100=9.64%, 250=64.24% 00:32:38.751 lat (usec) : 500=22.14%, 750=0.18%, 1000=0.04% 00:32:38.751 lat (msec) : 2=0.06%, 4=0.78%, 10=1.26%, 20=0.01% 00:32:38.751 cpu : usr=69.10%, sys=0.23%, ctx=143979, majf=0, minf=15748 00:32:38.751 IO depths : 1=12.3%, 2=24.5%, 4=50.1%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.751 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.751 issued rwts: total=0,1350243,1350244,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.751 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:38.751 00:32:38.751 Run status group 0 (all jobs): 00:32:38.751 WRITE: bw=527MiB/s (553MB/s), 527MiB/s-527MiB/s (553MB/s-553MB/s), io=5274MiB (5531MB), run=10001-10001msec 00:32:38.751 TRIM: bw=527MiB/s (553MB/s), 527MiB/s-527MiB/s (553MB/s-553MB/s), io=5274MiB (5531MB), run=10001-10001msec 00:32:40.649 ----------------------------------------------------- 00:32:40.649 Suppressions used: 00:32:40.649 count bytes template 00:32:40.649 14 129 /usr/src/fio/parse.c 00:32:40.649 1 904 libcrypto.so 00:32:40.649 ----------------------------------------------------- 00:32:40.649 00:32:40.649 00:32:40.649 real 0m14.118s 00:32:40.649 user 1m41.860s 00:32:40.649 sys 0m1.129s 00:32:40.649 16:09:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.649 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.649 ************************************ 00:32:40.649 END TEST bdev_fio_trim 00:32:40.649 ************************************ 00:32:40.649 16:09:44 -- bdev/blockdev.sh@366 -- # rm -f 00:32:40.649 16:09:44 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:40.649 
/home/vagrant/spdk_repo/spdk 00:32:40.649 16:09:44 -- bdev/blockdev.sh@368 -- # popd 00:32:40.649 16:09:44 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:32:40.649 00:32:40.649 real 0m28.729s 00:32:40.649 user 3m19.547s 00:32:40.649 sys 0m6.068s 00:32:40.649 16:09:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.649 ************************************ 00:32:40.649 END TEST bdev_fio 00:32:40.649 ************************************ 00:32:40.649 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.649 16:09:44 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:40.649 16:09:44 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:40.649 16:09:44 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:40.649 16:09:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:40.649 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.649 ************************************ 00:32:40.649 START TEST bdev_verify 00:32:40.649 ************************************ 00:32:40.649 16:09:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:40.649 [2024-07-22 16:09:44.879212] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:40.649 [2024-07-22 16:09:44.879623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68200 ] 00:32:40.908 [2024-07-22 16:09:45.059457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:41.166 [2024-07-22 16:09:45.373982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.166 [2024-07-22 16:09:45.374011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.733 [2024-07-22 16:09:45.801964] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:41.733 [2024-07-22 16:09:45.802056] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:41.733 [2024-07-22 16:09:45.809908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:41.733 [2024-07-22 16:09:45.809965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:41.733 [2024-07-22 16:09:45.817955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:32:41.733 [2024-07-22 16:09:45.818026] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:41.733 [2024-07-22 16:09:45.818057] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:41.991 [2024-07-22 16:09:46.054530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:32:41.991 [2024-07-22 16:09:46.054902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:41.991 [2024-07-22 16:09:46.055018] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:32:41.991 [2024-07-22 16:09:46.055045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:41.991 [2024-07-22 
16:09:46.058761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:41.991 [2024-07-22 16:09:46.058819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:32:42.249 Running I/O for 5 seconds... 00:32:47.515 00:32:47.515 Latency(us) 00:32:47.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.515 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x1000 00:32:47.515 Malloc0 : 5.21 1245.17 4.86 0.00 0.00 102071.44 2457.60 163005.91 00:32:47.515 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x1000 length 0x1000 00:32:47.515 Malloc0 : 5.23 1240.98 4.85 0.00 0.00 102456.09 2442.71 228780.22 00:32:47.515 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x800 00:32:47.515 Malloc1p0 : 5.21 859.06 3.36 0.00 0.00 147746.77 4944.99 151566.89 00:32:47.515 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x800 length 0x800 00:32:47.515 Malloc1p0 : 5.24 871.81 3.41 0.00 0.00 145691.13 4974.78 142987.64 00:32:47.515 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x800 00:32:47.515 Malloc1p1 : 5.22 858.78 3.35 0.00 0.00 147515.98 4796.04 144894.14 00:32:47.515 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x800 length 0x800 00:32:47.515 Malloc1p1 : 5.24 871.36 3.40 0.00 0.00 145450.14 4736.47 136314.88 00:32:47.515 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x200 00:32:47.515 Malloc2p0 : 5.22 858.50 3.35 0.00 0.00 147282.23 4587.52 140127.88 00:32:47.515 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x200 length 0x200 00:32:47.515 Malloc2p0 : 5.24 870.95 3.40 0.00 0.00 145262.30 4587.52 131548.63 00:32:47.515 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x200 00:32:47.515 Malloc2p1 : 5.22 858.21 3.35 0.00 0.00 147088.04 4230.05 136314.88 00:32:47.515 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x200 length 0x200 00:32:47.515 Malloc2p1 : 5.24 870.43 3.40 0.00 0.00 145096.92 4170.47 127735.62 00:32:47.515 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x200 00:32:47.515 Malloc2p2 : 5.22 857.93 3.35 0.00 0.00 146887.75 4706.68 131548.63 00:32:47.515 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x200 length 0x200 00:32:47.515 Malloc2p2 : 5.25 869.91 3.40 0.00 0.00 144929.61 4498.15 123922.62 00:32:47.515 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x200 00:32:47.515 Malloc2p3 : 5.22 857.66 3.35 0.00 0.00 146681.49 4527.94 126782.37 00:32:47.515 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x200 length 0x200 00:32:47.515 Malloc2p3 : 5.25 
869.68 3.40 0.00 0.00 144751.01 4408.79 125829.12 00:32:47.515 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x0 length 0x200 00:32:47.515 Malloc2p4 : 5.22 857.38 3.35 0.00 0.00 146462.40 4527.94 122969.37 00:32:47.515 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.515 Verification LBA range: start 0x200 length 0x200 00:32:47.515 Malloc2p4 : 5.25 869.45 3.40 0.00 0.00 144569.96 4349.21 125829.12 00:32:47.774 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x200 00:32:47.774 Malloc2p5 : 5.23 857.12 3.35 0.00 0.00 146265.58 4676.89 117726.49 00:32:47.774 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x200 length 0x200 00:32:47.774 Malloc2p5 : 5.25 869.23 3.40 0.00 0.00 144355.70 4527.94 126782.37 00:32:47.774 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x200 00:32:47.774 Malloc2p6 : 5.23 856.85 3.35 0.00 0.00 146048.77 4408.79 114390.11 00:32:47.774 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x200 length 0x200 00:32:47.774 Malloc2p6 : 5.25 869.00 3.39 0.00 0.00 144161.99 4557.73 126782.37 00:32:47.774 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x200 00:32:47.774 Malloc2p7 : 5.23 856.57 3.35 0.00 0.00 145865.07 4557.73 112006.98 00:32:47.774 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x200 length 0x200 00:32:47.774 Malloc2p7 : 5.25 868.76 3.39 0.00 0.00 143949.09 4379.00 123922.62 00:32:47.774 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x1000 00:32:47.774 TestPT : 5.23 856.27 3.34 0.00 0.00 145613.41 4736.47 112483.61 00:32:47.774 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x1000 length 0x1000 00:32:47.774 TestPT : 5.25 839.62 3.28 0.00 0.00 148656.42 8400.52 177304.67 00:32:47.774 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x2000 00:32:47.774 raid0 : 5.23 855.74 3.34 0.00 0.00 145287.69 4527.94 113436.86 00:32:47.774 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x2000 length 0x2000 00:32:47.774 raid0 : 5.26 868.32 3.39 0.00 0.00 143440.91 4468.36 122969.37 00:32:47.774 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x2000 00:32:47.774 concat0 : 5.24 855.37 3.34 0.00 0.00 145108.12 4587.52 114390.11 00:32:47.774 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x2000 length 0x2000 00:32:47.774 concat0 : 5.26 868.10 3.39 0.00 0.00 143226.51 4438.57 122969.37 00:32:47.774 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x1000 00:32:47.774 raid1 : 5.24 854.83 3.34 0.00 0.00 144931.54 4617.31 115819.99 00:32:47.774 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:32:47.774 Verification LBA range: start 0x1000 length 0x1000 00:32:47.774 raid1 : 5.26 867.83 3.39 0.00 0.00 143007.28 5123.72 122016.12 00:32:47.774 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x0 length 0x4e2 00:32:47.774 AIO0 : 5.24 853.80 3.34 0.00 0.00 144841.89 3723.64 118203.11 00:32:47.774 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:47.774 Verification LBA range: start 0x4e2 length 0x4e2 00:32:47.774 AIO0 : 5.26 867.63 3.39 0.00 0.00 142754.14 4379.00 121539.49 00:32:47.774 =================================================================================================================== 00:32:47.774 Total : 28352.29 110.75 0.00 0.00 141646.32 2442.71 228780.22 00:32:50.305 00:32:50.305 real 0m9.462s 00:32:50.305 user 0m16.911s 00:32:50.305 sys 0m0.751s 00:32:50.305 16:09:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:50.305 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.305 ************************************ 00:32:50.305 END TEST bdev_verify 00:32:50.305 ************************************ 00:32:50.305 16:09:54 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:50.305 16:09:54 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:50.305 16:09:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:50.305 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.305 ************************************ 00:32:50.305 START TEST bdev_verify_big_io 00:32:50.305 ************************************ 00:32:50.305 16:09:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:50.305 [2024-07-22 16:09:54.379757] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:50.305 [2024-07-22 16:09:54.379937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68315 ] 00:32:50.305 [2024-07-22 16:09:54.555770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:50.913 [2024-07-22 16:09:54.896166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.913 [2024-07-22 16:09:54.896170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.171 [2024-07-22 16:09:55.345876] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:51.171 [2024-07-22 16:09:55.346264] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:51.171 [2024-07-22 16:09:55.353801] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:51.171 [2024-07-22 16:09:55.353979] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:51.171 [2024-07-22 16:09:55.361851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:32:51.171 [2024-07-22 16:09:55.362089] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:51.171 [2024-07-22 16:09:55.362259] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:51.431 [2024-07-22 16:09:55.577036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:32:51.431 [2024-07-22 16:09:55.577502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.431 [2024-07-22 16:09:55.577699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:32:51.431 [2024-07-22 16:09:55.577865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.431 [2024-07-22 16:09:55.581227] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.431 [2024-07-22 16:09:55.581273] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:32:51.689 [2024-07-22 16:09:55.936226] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:32:51.689 [2024-07-22 16:09:55.939749] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:32:51.689 [2024-07-22 16:09:55.943982] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:32:51.689 [2024-07-22 16:09:55.948142] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:32:51.689 [2024-07-22 16:09:55.952039] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:32:51.689 [2024-07-22 16:09:55.956551] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:32:51.689 [2024-07-22 16:09:55.960336] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.964869] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.968841] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.973719] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.977940] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.982719] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.986965] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.991597] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:55.996544] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:56.000307] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:32:51.948 [2024-07-22 16:09:56.095983] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:32:51.948 [2024-07-22 16:09:56.103893] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:32:51.948 Running I/O for 5 seconds... 00:32:58.504 00:32:58.504 Latency(us) 00:32:58.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.504 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x100 00:32:58.504 Malloc0 : 5.67 263.67 16.48 0.00 0.00 468279.81 30146.56 1441315.37 00:32:58.504 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x100 length 0x100 00:32:58.504 Malloc0 : 5.76 260.47 16.28 0.00 0.00 472410.84 28478.37 1708225.63 00:32:58.504 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x80 00:32:58.504 Malloc1p0 : 5.78 204.13 12.76 0.00 0.00 588936.24 53143.74 1304047.24 00:32:58.504 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x80 length 0x80 00:32:58.504 Malloc1p0 : 5.86 173.59 10.85 0.00 0.00 697235.27 52905.43 1197283.14 00:32:58.504 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x80 00:32:58.504 Malloc1p1 : 5.96 100.48 6.28 0.00 0.00 1172922.53 51713.86 2516582.40 00:32:58.504 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x80 length 0x80 00:32:58.504 Malloc1p1 : 5.99 105.66 6.60 0.00 0.00 1119549.90 52905.43 2409818.30 00:32:58.504 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x20 00:32:58.504 Malloc2p0 : 5.78 56.76 3.55 0.00 0.00 522888.19 8400.52 835047.80 00:32:58.504 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x20 length 0x20 00:32:58.504 Malloc2p0 : 5.82 59.94 3.75 0.00 0.00 498376.27 8400.52 827421.79 00:32:58.504 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x20 00:32:58.504 Malloc2p1 : 5.78 56.75 3.55 0.00 0.00 520233.50 8340.95 819795.78 00:32:58.504 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x20 length 0x20 00:32:58.504 Malloc2p1 : 5.82 59.93 3.75 0.00 0.00 495940.74 8400.52 812169.77 00:32:58.504 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x20 00:32:58.504 Malloc2p2 : 5.78 56.74 3.55 0.00 0.00 517790.62 7745.16 800730.76 00:32:58.504 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x20 length 0x20 00:32:58.504 Malloc2p2 : 5.83 59.91 3.74 0.00 0.00 493465.17 7923.90 796917.76 00:32:58.504 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x20 00:32:58.504 Malloc2p3 : 5.78 56.72 3.55 0.00 0.00 515280.07 8221.79 781665.75 00:32:58.504 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x20 length 0x20 00:32:58.504 Malloc2p3 : 5.83 59.90 3.74 0.00 0.00 491117.21 8400.52 777852.74 00:32:58.504 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x20 00:32:58.504 Malloc2p4 : 5.78 56.71 3.54 0.00 0.00 512795.31 9175.04 762600.73 00:32:58.504 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x20 length 0x20 00:32:58.504 Malloc2p4 : 5.83 59.88 3.74 0.00 0.00 488676.21 8996.31 754974.72 00:32:58.504 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x20 00:32:58.504 Malloc2p5 : 5.78 56.70 3.54 0.00 0.00 509908.04 9413.35 739722.71 00:32:58.504 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x20 length 0x20 00:32:58.504 Malloc2p5 : 5.83 59.87 3.74 0.00 0.00 485946.88 9115.46 732096.70 00:32:58.504 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:58.504 Verification LBA range: start 0x0 length 0x20 00:32:58.504 Malloc2p6 : 5.79 56.69 3.54 0.00 0.00 506877.82 9532.51 716844.68 00:32:58.504 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x20 length 0x20 00:32:58.505 Malloc2p6 : 5.83 59.85 3.74 0.00 0.00 483307.51 9234.62 709218.68 00:32:58.505 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x0 length 0x20 00:32:58.505 Malloc2p7 : 5.79 56.67 3.54 0.00 0.00 504227.05 9592.09 693966.66 00:32:58.505 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x20 length 0x20 00:32:58.505 Malloc2p7 : 5.83 59.84 3.74 0.00 0.00 480578.36 9413.35 690153.66 00:32:58.505 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x0 length 0x100 00:32:58.505 TestPT : 5.99 101.03 6.31 0.00 0.00 1104785.11 72923.69 2333558.23 00:32:58.505 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x100 length 0x100 00:32:58.505 TestPT : 6.04 99.19 6.20 0.00 0.00 1129144.80 104380.97 2226794.12 00:32:58.505 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x0 length 0x200 00:32:58.505 raid0 : 6.04 110.50 6.91 0.00 0.00 1000946.81 54096.99 2486078.37 00:32:58.505 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x200 length 0x200 00:32:58.505 raid0 : 5.91 112.99 7.06 0.00 0.00 987986.06 54096.99 2394566.28 00:32:58.505 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x0 length 0x200 00:32:58.505 concat0 : 5.99 116.01 7.25 0.00 0.00 936965.58 26333.56 2486078.37 00:32:58.505 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x200 length 0x200 00:32:58.505 concat0 : 6.08 114.38 
7.15 0.00 0.00 945729.83 49807.36 2394566.28 00:32:58.505 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x0 length 0x100 00:32:58.505 raid1 : 6.02 134.13 8.38 0.00 0.00 799902.94 21805.61 2470826.36 00:32:58.505 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x100 length 0x100 00:32:58.505 raid1 : 6.04 131.74 8.23 0.00 0.00 813710.05 27286.81 2379314.27 00:32:58.505 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x0 length 0x4e 00:32:58.505 AIO0 : 6.08 143.67 8.98 0.00 0.00 447961.55 1660.74 1426063.36 00:32:58.505 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:32:58.505 Verification LBA range: start 0x4e length 0x4e 00:32:58.505 AIO0 : 6.08 143.31 8.96 0.00 0.00 449020.52 2249.08 1365055.30 00:32:58.505 =================================================================================================================== 00:32:58.505 Total : 3247.83 202.99 0.00 0.00 679822.34 1660.74 2516582.40 00:33:01.033 00:33:01.033 real 0m10.736s 00:33:01.033 user 0m19.505s 00:33:01.033 sys 0m0.680s 00:33:01.033 ************************************ 00:33:01.033 END TEST bdev_verify_big_io 00:33:01.033 ************************************ 00:33:01.033 16:10:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:01.033 16:10:05 -- common/autotest_common.sh@10 -- # set +x 00:33:01.033 16:10:05 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:01.033 16:10:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:01.033 16:10:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:01.033 16:10:05 -- common/autotest_common.sh@10 -- # set +x 00:33:01.033 ************************************ 00:33:01.033 START TEST bdev_write_zeroes 00:33:01.033 ************************************ 00:33:01.033 16:10:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:01.033 [2024-07-22 16:10:05.173806] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:33:01.033 [2024-07-22 16:10:05.174017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68450 ] 00:33:01.290 [2024-07-22 16:10:05.352525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.549 [2024-07-22 16:10:05.627312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.807 [2024-07-22 16:10:06.065698] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:01.807 [2024-07-22 16:10:06.065822] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:01.807 [2024-07-22 16:10:06.073623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:01.807 [2024-07-22 16:10:06.073680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:02.064 [2024-07-22 16:10:06.081706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:33:02.064 [2024-07-22 16:10:06.081929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:02.064 [2024-07-22 16:10:06.081957] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:02.064 [2024-07-22 16:10:06.295554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:33:02.064 [2024-07-22 16:10:06.295647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:02.064 [2024-07-22 16:10:06.295682] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:33:02.064 [2024-07-22 16:10:06.295709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:02.064 [2024-07-22 16:10:06.298474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:02.064 [2024-07-22 16:10:06.298653] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:33:02.627 Running I/O for 1 seconds... 
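For reference, the MiB/s column in the bdevperf write_zeroes table below follows directly from the measured IOPS and the 4096-byte I/O size passed to bdevperf with -o 4096. A minimal, illustrative shell check of that conversion (the numeric values are copied from the Malloc0 row that follows; nothing in this sketch is emitted by the test itself):

    # back-of-the-envelope check of the table below: MiB/s = IOPS * io_size / 2^20
    iops=4380.35      # measured IOPS for Malloc0 (from the table below)
    io_size=4096      # -o 4096 passed to bdevperf above
    awk -v iops="$iops" -v sz="$io_size" \
        'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
    # prints 17.11 MiB/s, matching the MiB/s column for the Malloc0 job

The same conversion applies to every per-bdev row in the table, since all jobs use the same 4 KiB I/O size.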
00:33:03.560 00:33:03.560 Latency(us) 00:33:03.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.561 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc0 : 1.05 4380.35 17.11 0.00 0.00 29202.90 960.70 47662.55 00:33:03.561 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc1p0 : 1.05 4373.53 17.08 0.00 0.00 29172.81 1117.09 46470.98 00:33:03.561 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc1p1 : 1.06 4366.70 17.06 0.00 0.00 29148.30 1266.04 45041.11 00:33:03.561 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p0 : 1.06 4359.70 17.03 0.00 0.00 29123.58 1154.33 44087.85 00:33:03.561 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p1 : 1.06 4353.05 17.00 0.00 0.00 29097.76 1109.64 42896.29 00:33:03.561 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p2 : 1.06 4346.12 16.98 0.00 0.00 29072.27 1094.75 41943.04 00:33:03.561 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p3 : 1.06 4339.09 16.95 0.00 0.00 29051.66 1094.75 40751.48 00:33:03.561 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p4 : 1.06 4331.70 16.92 0.00 0.00 29043.68 1087.30 39798.23 00:33:03.561 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p5 : 1.07 4324.62 16.89 0.00 0.00 29021.12 1087.30 38606.66 00:33:03.561 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p6 : 1.07 4317.93 16.87 0.00 0.00 28985.90 1072.41 37653.41 00:33:03.561 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 Malloc2p7 : 1.07 4311.03 16.84 0.00 0.00 28965.39 1064.96 36461.85 00:33:03.561 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 TestPT : 1.07 4304.10 16.81 0.00 0.00 28955.41 1109.64 35508.60 00:33:03.561 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 raid0 : 1.07 4296.04 16.78 0.00 0.00 28921.22 1876.71 33840.41 00:33:03.561 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 concat0 : 1.07 4288.00 16.75 0.00 0.00 28861.88 1854.37 32648.84 00:33:03.561 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 raid1 : 1.08 4277.78 16.71 0.00 0.00 28798.28 2993.80 32648.84 00:33:03.561 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:03.561 AIO0 : 1.08 4266.99 16.67 0.00 0.00 28710.99 1779.90 32887.16 00:33:03.561 =================================================================================================================== 00:33:03.561 Total : 69236.75 270.46 0.00 0.00 29008.35 960.70 47662.55 00:33:06.088 00:33:06.088 real 0m5.078s 00:33:06.088 user 0m4.355s 00:33:06.088 sys 0m0.575s 00:33:06.088 16:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:06.088 ************************************ 00:33:06.088 END TEST bdev_write_zeroes 00:33:06.088 ************************************ 00:33:06.088 16:10:10 -- common/autotest_common.sh@10 -- # set +x 00:33:06.088 16:10:10 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:06.088 16:10:10 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:06.088 16:10:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:06.088 16:10:10 -- common/autotest_common.sh@10 -- # set +x 00:33:06.088 ************************************ 00:33:06.088 START TEST bdev_json_nonenclosed 00:33:06.088 ************************************ 00:33:06.088 16:10:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:06.088 [2024-07-22 16:10:10.305511] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:06.088 [2024-07-22 16:10:10.305732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68520 ] 00:33:06.346 [2024-07-22 16:10:10.481450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.604 [2024-07-22 16:10:10.781037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.604 [2024-07-22 16:10:10.781297] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:33:06.604 [2024-07-22 16:10:10.781335] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:07.169 00:33:07.169 real 0m1.020s 00:33:07.169 user 0m0.759s 00:33:07.169 sys 0m0.160s 00:33:07.169 16:10:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:07.169 16:10:11 -- common/autotest_common.sh@10 -- # set +x 00:33:07.169 ************************************ 00:33:07.169 END TEST bdev_json_nonenclosed 00:33:07.169 ************************************ 00:33:07.169 16:10:11 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:07.169 16:10:11 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:33:07.169 16:10:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:07.169 16:10:11 -- common/autotest_common.sh@10 -- # set +x 00:33:07.169 ************************************ 00:33:07.169 START TEST bdev_json_nonarray 00:33:07.169 ************************************ 00:33:07.169 16:10:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:07.169 [2024-07-22 16:10:11.397684] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:33:07.169 [2024-07-22 16:10:11.398067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68557 ] 00:33:07.427 [2024-07-22 16:10:11.581114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.685 [2024-07-22 16:10:11.883956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:07.685 [2024-07-22 16:10:11.884252] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:33:07.685 [2024-07-22 16:10:11.884285] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:08.251 ************************************ 00:33:08.251 END TEST bdev_json_nonarray 00:33:08.251 ************************************ 00:33:08.251 00:33:08.251 real 0m1.037s 00:33:08.251 user 0m0.757s 00:33:08.251 sys 0m0.180s 00:33:08.251 16:10:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.251 16:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.251 16:10:12 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:33:08.251 16:10:12 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:33:08.252 16:10:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:08.252 16:10:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:08.252 16:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.252 ************************************ 00:33:08.252 START TEST bdev_qos 00:33:08.252 ************************************ 00:33:08.252 16:10:12 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:33:08.252 Process qos testing pid: 68588 00:33:08.252 16:10:12 -- bdev/blockdev.sh@444 -- # QOS_PID=68588 00:33:08.252 16:10:12 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 68588' 00:33:08.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.252 16:10:12 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:33:08.252 16:10:12 -- bdev/blockdev.sh@447 -- # waitforlisten 68588 00:33:08.252 16:10:12 -- common/autotest_common.sh@819 -- # '[' -z 68588 ']' 00:33:08.252 16:10:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.252 16:10:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:08.252 16:10:12 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:33:08.252 16:10:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.252 16:10:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:08.252 16:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.252 [2024-07-22 16:10:12.452969] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:33:08.252 [2024-07-22 16:10:12.453181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68588 ] 00:33:08.510 [2024-07-22 16:10:12.616632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.769 [2024-07-22 16:10:12.876559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.334 16:10:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:09.334 16:10:13 -- common/autotest_common.sh@852 -- # return 0 00:33:09.335 16:10:13 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:33:09.335 16:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.335 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.592 Malloc_0 00:33:09.592 16:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.592 16:10:13 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:33:09.592 16:10:13 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:33:09.592 16:10:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:09.592 16:10:13 -- common/autotest_common.sh@889 -- # local i 00:33:09.592 16:10:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:09.592 16:10:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:09.592 16:10:13 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:09.592 16:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.592 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.592 16:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.592 16:10:13 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:33:09.592 16:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.592 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.592 [ 00:33:09.592 { 00:33:09.592 "name": "Malloc_0", 00:33:09.592 "aliases": [ 00:33:09.592 "545cb558-9af0-4310-a002-a9a922920cbf" 00:33:09.592 ], 00:33:09.592 "product_name": "Malloc disk", 00:33:09.592 "block_size": 512, 00:33:09.592 "num_blocks": 262144, 00:33:09.592 "uuid": "545cb558-9af0-4310-a002-a9a922920cbf", 00:33:09.592 "assigned_rate_limits": { 00:33:09.592 "rw_ios_per_sec": 0, 00:33:09.592 "rw_mbytes_per_sec": 0, 00:33:09.592 "r_mbytes_per_sec": 0, 00:33:09.592 "w_mbytes_per_sec": 0 00:33:09.592 }, 00:33:09.592 "claimed": false, 00:33:09.592 "zoned": false, 00:33:09.592 "supported_io_types": { 00:33:09.592 "read": true, 00:33:09.592 "write": true, 00:33:09.592 "unmap": true, 00:33:09.592 "write_zeroes": true, 00:33:09.592 "flush": true, 00:33:09.592 "reset": true, 00:33:09.592 "compare": false, 00:33:09.592 "compare_and_write": false, 00:33:09.592 "abort": true, 00:33:09.592 "nvme_admin": false, 00:33:09.592 "nvme_io": false 00:33:09.592 }, 00:33:09.592 "memory_domains": [ 00:33:09.592 { 00:33:09.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.592 "dma_device_type": 2 00:33:09.592 } 00:33:09.592 ], 00:33:09.592 "driver_specific": {} 00:33:09.592 } 00:33:09.592 ] 00:33:09.592 16:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.592 16:10:13 -- common/autotest_common.sh@895 -- # return 0 00:33:09.592 16:10:13 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:33:09.592 16:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.592 16:10:13 -- common/autotest_common.sh@10 -- # 
set +x 00:33:09.592 Null_1 00:33:09.592 16:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.592 16:10:13 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:33:09.592 16:10:13 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:33:09.592 16:10:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:09.592 16:10:13 -- common/autotest_common.sh@889 -- # local i 00:33:09.592 16:10:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:09.592 16:10:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:09.592 16:10:13 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:09.592 16:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.592 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.592 16:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.592 16:10:13 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:33:09.592 16:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.592 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.592 [ 00:33:09.592 { 00:33:09.592 "name": "Null_1", 00:33:09.592 "aliases": [ 00:33:09.592 "e3c3f99d-28f8-42f8-a8b6-7eee2720cde9" 00:33:09.592 ], 00:33:09.592 "product_name": "Null disk", 00:33:09.592 "block_size": 512, 00:33:09.592 "num_blocks": 262144, 00:33:09.592 "uuid": "e3c3f99d-28f8-42f8-a8b6-7eee2720cde9", 00:33:09.592 "assigned_rate_limits": { 00:33:09.592 "rw_ios_per_sec": 0, 00:33:09.592 "rw_mbytes_per_sec": 0, 00:33:09.592 "r_mbytes_per_sec": 0, 00:33:09.592 "w_mbytes_per_sec": 0 00:33:09.592 }, 00:33:09.592 "claimed": false, 00:33:09.592 "zoned": false, 00:33:09.592 "supported_io_types": { 00:33:09.592 "read": true, 00:33:09.592 "write": true, 00:33:09.593 "unmap": false, 00:33:09.593 "write_zeroes": true, 00:33:09.593 "flush": false, 00:33:09.593 "reset": true, 00:33:09.593 "compare": false, 00:33:09.593 "compare_and_write": false, 00:33:09.593 "abort": true, 00:33:09.593 "nvme_admin": false, 00:33:09.593 "nvme_io": false 00:33:09.593 }, 00:33:09.593 "driver_specific": {} 00:33:09.593 } 00:33:09.593 ] 00:33:09.593 16:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.593 16:10:13 -- common/autotest_common.sh@895 -- # return 0 00:33:09.593 16:10:13 -- bdev/blockdev.sh@455 -- # qos_function_test 00:33:09.593 16:10:13 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:33:09.593 16:10:13 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:33:09.593 16:10:13 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:09.593 16:10:13 -- bdev/blockdev.sh@410 -- # local io_result=0 00:33:09.593 16:10:13 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:33:09.593 16:10:13 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:33:09.593 16:10:13 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:33:09.593 16:10:13 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:33:09.593 16:10:13 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:33:09.593 16:10:13 -- bdev/blockdev.sh@375 -- # local iostat_result 00:33:09.593 16:10:13 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:33:09.593 16:10:13 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:33:09.593 16:10:13 -- bdev/blockdev.sh@376 -- # tail -1 00:33:09.593 Running I/O for 60 seconds... 
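For reference, the qos_function_test trace that follows samples the unthrottled IOPS of Malloc_0 with iostat.py (grep for the device, tail -1 for the last sample, awk for the IOPS column), derives an IOPS cap from that measurement, applies it with bdev_set_qos_limit, and then re-measures to check that the throttled rate lands within roughly +/-10% of the cap. Below is a condensed, illustrative sketch of that flow, not the script itself: the rounding rule (a quarter of the unthrottled rate, rounded down to the nearest thousand) is an inference that happens to match the 60684 -> 15000 values traced below, and the rpc_cmd/iostat helper wrappers are replaced with direct script paths.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    iostat=/home/vagrant/spdk_repo/spdk/scripts/iostat.py

    # 1. Measure the unthrottled device: last iostat sample, column 2 is IOPS.
    io_result=$("$iostat" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
    io_result=${io_result%.*}                      # e.g. 60684

    # 2. Derive a cap well below that rate (assumed rounding, see note above).
    iops_limit=$(((io_result / 4) / 1000 * 1000))  # e.g. 15000
    [ "$iops_limit" -gt 1000 ] || exit 1           # qos_lower_iops_limit guard
    "$rpc" bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0

    # 3. Re-measure and require the throttled rate to sit within +/-10% of the cap.
    qos_result=$("$iostat" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
    qos_result=${qos_result%.*}                    # e.g. 14999
    [ "$qos_result" -ge $((iops_limit * 90 / 100)) ] &&
    [ "$qos_result" -le $((iops_limit * 110 / 100)) ]

In the run logged below this works out to bounds of 13500 and 16500 around the 15000 IOPS cap, and the measured 14999 IOPS falls inside them, so the IOPS part of the QoS test passes.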
00:33:14.857 16:10:18 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 60684.76 242739.05 0.00 0.00 245760.00 0.00 0.00 ' 00:33:14.857 16:10:18 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:33:14.857 16:10:18 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:33:14.857 16:10:18 -- bdev/blockdev.sh@378 -- # iostat_result=60684.76 00:33:14.857 16:10:18 -- bdev/blockdev.sh@383 -- # echo 60684 00:33:14.857 16:10:18 -- bdev/blockdev.sh@414 -- # io_result=60684 00:33:14.857 16:10:18 -- bdev/blockdev.sh@416 -- # iops_limit=15000 00:33:14.857 16:10:18 -- bdev/blockdev.sh@417 -- # '[' 15000 -gt 1000 ']' 00:33:14.857 16:10:18 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 15000 Malloc_0 00:33:14.857 16:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.857 16:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.857 16:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.857 16:10:18 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 15000 IOPS Malloc_0 00:33:14.857 16:10:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:33:14.857 16:10:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:14.857 16:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.857 ************************************ 00:33:14.857 START TEST bdev_qos_iops 00:33:14.857 ************************************ 00:33:14.857 16:10:18 -- common/autotest_common.sh@1104 -- # run_qos_test 15000 IOPS Malloc_0 00:33:14.857 16:10:18 -- bdev/blockdev.sh@387 -- # local qos_limit=15000 00:33:14.857 16:10:18 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:33:14.857 16:10:18 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:33:14.857 16:10:18 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:33:14.857 16:10:18 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:33:14.857 16:10:18 -- bdev/blockdev.sh@375 -- # local iostat_result 00:33:14.857 16:10:18 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:33:14.857 16:10:18 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:33:14.857 16:10:18 -- bdev/blockdev.sh@376 -- # tail -1 00:33:20.123 16:10:24 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 14999.26 59997.02 0.00 0.00 60780.00 0.00 0.00 ' 00:33:20.123 16:10:24 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:33:20.123 16:10:24 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:33:20.123 16:10:24 -- bdev/blockdev.sh@378 -- # iostat_result=14999.26 00:33:20.123 16:10:24 -- bdev/blockdev.sh@383 -- # echo 14999 00:33:20.123 ************************************ 00:33:20.123 END TEST bdev_qos_iops 00:33:20.123 ************************************ 00:33:20.123 16:10:24 -- bdev/blockdev.sh@390 -- # qos_result=14999 00:33:20.123 16:10:24 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:33:20.123 16:10:24 -- bdev/blockdev.sh@394 -- # lower_limit=13500 00:33:20.123 16:10:24 -- bdev/blockdev.sh@395 -- # upper_limit=16500 00:33:20.123 16:10:24 -- bdev/blockdev.sh@398 -- # '[' 14999 -lt 13500 ']' 00:33:20.123 16:10:24 -- bdev/blockdev.sh@398 -- # '[' 14999 -gt 16500 ']' 00:33:20.123 00:33:20.123 real 0m5.248s 00:33:20.123 user 0m0.133s 00:33:20.123 sys 0m0.039s 00:33:20.123 16:10:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:20.123 16:10:24 -- common/autotest_common.sh@10 -- # set +x 00:33:20.123 16:10:24 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:33:20.123 16:10:24 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:33:20.123 16:10:24 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:33:20.123 16:10:24 -- bdev/blockdev.sh@375 -- # local iostat_result 00:33:20.123 16:10:24 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:33:20.123 16:10:24 -- bdev/blockdev.sh@376 -- # grep Null_1 00:33:20.123 16:10:24 -- bdev/blockdev.sh@376 -- # tail -1 00:33:25.386 16:10:29 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 22888.44 91553.77 0.00 0.00 93184.00 0.00 0.00 ' 00:33:25.386 16:10:29 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:33:25.386 16:10:29 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:33:25.386 16:10:29 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:33:25.386 16:10:29 -- bdev/blockdev.sh@380 -- # iostat_result=93184.00 00:33:25.386 16:10:29 -- bdev/blockdev.sh@383 -- # echo 93184 00:33:25.386 16:10:29 -- bdev/blockdev.sh@425 -- # bw_limit=93184 00:33:25.386 16:10:29 -- bdev/blockdev.sh@426 -- # bw_limit=9 00:33:25.386 16:10:29 -- bdev/blockdev.sh@427 -- # '[' 9 -lt 2 ']' 00:33:25.386 16:10:29 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:33:25.386 16:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:25.386 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:33:25.386 16:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:25.386 16:10:29 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:33:25.386 16:10:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:33:25.386 16:10:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:25.386 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:33:25.386 ************************************ 00:33:25.386 START TEST bdev_qos_bw 00:33:25.386 ************************************ 00:33:25.386 16:10:29 -- common/autotest_common.sh@1104 -- # run_qos_test 9 BANDWIDTH Null_1 00:33:25.386 16:10:29 -- bdev/blockdev.sh@387 -- # local qos_limit=9 00:33:25.386 16:10:29 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:33:25.386 16:10:29 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:33:25.386 16:10:29 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:33:25.386 16:10:29 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:33:25.386 16:10:29 -- bdev/blockdev.sh@375 -- # local iostat_result 00:33:25.386 16:10:29 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:33:25.386 16:10:29 -- bdev/blockdev.sh@376 -- # grep Null_1 00:33:25.386 16:10:29 -- bdev/blockdev.sh@376 -- # tail -1 00:33:30.653 16:10:34 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2306.73 9226.90 0.00 0.00 9512.00 0.00 0.00 ' 00:33:30.653 16:10:34 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:33:30.653 16:10:34 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:33:30.653 16:10:34 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:33:30.653 16:10:34 -- bdev/blockdev.sh@380 -- # iostat_result=9512.00 00:33:30.653 16:10:34 -- bdev/blockdev.sh@383 -- # echo 9512 00:33:30.653 ************************************ 00:33:30.653 END TEST bdev_qos_bw 00:33:30.653 ************************************ 00:33:30.653 16:10:34 -- bdev/blockdev.sh@390 -- # qos_result=9512 00:33:30.653 16:10:34 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:33:30.653 16:10:34 -- bdev/blockdev.sh@392 -- # qos_limit=9216 00:33:30.653 16:10:34 -- bdev/blockdev.sh@394 -- # lower_limit=8294 00:33:30.653 16:10:34 -- bdev/blockdev.sh@395 -- # upper_limit=10137 00:33:30.653 
16:10:34 -- bdev/blockdev.sh@398 -- # '[' 9512 -lt 8294 ']' 00:33:30.653 16:10:34 -- bdev/blockdev.sh@398 -- # '[' 9512 -gt 10137 ']' 00:33:30.653 00:33:30.653 real 0m5.288s 00:33:30.653 user 0m0.135s 00:33:30.653 sys 0m0.027s 00:33:30.653 16:10:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:30.653 16:10:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.653 16:10:34 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:33:30.653 16:10:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:30.653 16:10:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.653 16:10:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:30.653 16:10:34 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:33:30.653 16:10:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:33:30.653 16:10:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:30.653 16:10:34 -- common/autotest_common.sh@10 -- # set +x 00:33:30.653 ************************************ 00:33:30.653 START TEST bdev_qos_ro_bw 00:33:30.653 ************************************ 00:33:30.653 16:10:34 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:33:30.653 16:10:34 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:33:30.653 16:10:34 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:33:30.653 16:10:34 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:33:30.653 16:10:34 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:33:30.653 16:10:34 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:33:30.653 16:10:34 -- bdev/blockdev.sh@375 -- # local iostat_result 00:33:30.653 16:10:34 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:33:30.653 16:10:34 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:33:30.654 16:10:34 -- bdev/blockdev.sh@376 -- # tail -1 00:33:35.913 16:10:40 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.40 2045.59 0.00 0.00 2064.00 0.00 0.00 ' 00:33:35.913 16:10:40 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:33:35.913 16:10:40 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:33:35.913 16:10:40 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:33:35.913 16:10:40 -- bdev/blockdev.sh@380 -- # iostat_result=2064.00 00:33:35.913 16:10:40 -- bdev/blockdev.sh@383 -- # echo 2064 00:33:35.913 ************************************ 00:33:35.913 END TEST bdev_qos_ro_bw 00:33:35.913 ************************************ 00:33:35.913 16:10:40 -- bdev/blockdev.sh@390 -- # qos_result=2064 00:33:35.913 16:10:40 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:33:35.913 16:10:40 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:33:35.913 16:10:40 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:33:35.913 16:10:40 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:33:35.913 16:10:40 -- bdev/blockdev.sh@398 -- # '[' 2064 -lt 1843 ']' 00:33:35.913 16:10:40 -- bdev/blockdev.sh@398 -- # '[' 2064 -gt 2252 ']' 00:33:35.913 00:33:35.913 real 0m5.194s 00:33:35.913 user 0m0.126s 00:33:35.913 sys 0m0.047s 00:33:35.913 16:10:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:35.913 16:10:40 -- common/autotest_common.sh@10 -- # set +x 00:33:35.913 16:10:40 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:33:35.913 16:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:35.913 16:10:40 -- common/autotest_common.sh@10 -- # set +x 00:33:36.479 16:10:40 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.479 16:10:40 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:33:36.479 16:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:36.479 16:10:40 -- common/autotest_common.sh@10 -- # set +x 00:33:36.737 00:33:36.737 Latency(us) 00:33:36.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.737 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:33:36.737 Malloc_0 : 26.79 20408.14 79.72 0.00 0.00 12427.48 2502.28 503316.48 00:33:36.737 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:33:36.737 Null_1 : 27.04 21196.36 82.80 0.00 0.00 12046.93 863.88 243078.98 00:33:36.737 =================================================================================================================== 00:33:36.737 Total : 41604.50 162.52 0.00 0.00 12232.75 863.88 503316.48 00:33:36.737 0 00:33:36.737 16:10:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:36.737 16:10:40 -- bdev/blockdev.sh@459 -- # killprocess 68588 00:33:36.737 16:10:40 -- common/autotest_common.sh@926 -- # '[' -z 68588 ']' 00:33:36.737 16:10:40 -- common/autotest_common.sh@930 -- # kill -0 68588 00:33:36.737 16:10:40 -- common/autotest_common.sh@931 -- # uname 00:33:36.737 16:10:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:36.737 16:10:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68588 00:33:36.737 killing process with pid 68588 00:33:36.737 Received shutdown signal, test time was about 27.085393 seconds 00:33:36.737 00:33:36.737 Latency(us) 00:33:36.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.737 =================================================================================================================== 00:33:36.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:36.737 16:10:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:36.737 16:10:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:36.737 16:10:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68588' 00:33:36.737 16:10:40 -- common/autotest_common.sh@945 -- # kill 68588 00:33:36.737 16:10:40 -- common/autotest_common.sh@950 -- # wait 68588 00:33:38.679 16:10:42 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:33:38.679 00:33:38.679 real 0m30.087s 00:33:38.679 user 0m30.857s 00:33:38.679 sys 0m0.809s 00:33:38.679 ************************************ 00:33:38.679 END TEST bdev_qos 00:33:38.679 ************************************ 00:33:38.679 16:10:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:38.679 16:10:42 -- common/autotest_common.sh@10 -- # set +x 00:33:38.679 16:10:42 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:33:38.679 16:10:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:38.679 16:10:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:38.679 16:10:42 -- common/autotest_common.sh@10 -- # set +x 00:33:38.679 ************************************ 00:33:38.679 START TEST bdev_qd_sampling 00:33:38.679 ************************************ 00:33:38.679 16:10:42 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:33:38.679 16:10:42 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:33:38.679 Process bdev QD sampling period testing pid: 69006 00:33:38.679 16:10:42 -- bdev/blockdev.sh@539 -- # QD_PID=69006 00:33:38.679 16:10:42 -- 
bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:33:38.679 16:10:42 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 69006' 00:33:38.679 16:10:42 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:33:38.679 16:10:42 -- bdev/blockdev.sh@542 -- # waitforlisten 69006 00:33:38.679 16:10:42 -- common/autotest_common.sh@819 -- # '[' -z 69006 ']' 00:33:38.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.679 16:10:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.679 16:10:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:38.679 16:10:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.679 16:10:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:38.679 16:10:42 -- common/autotest_common.sh@10 -- # set +x 00:33:38.679 [2024-07-22 16:10:42.601366] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:38.679 [2024-07-22 16:10:42.601552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69006 ] 00:33:38.679 [2024-07-22 16:10:42.781048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:38.936 [2024-07-22 16:10:43.102414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.936 [2024-07-22 16:10:43.102436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.502 16:10:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:39.502 16:10:43 -- common/autotest_common.sh@852 -- # return 0 00:33:39.502 16:10:43 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:33:39.502 16:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.502 16:10:43 -- common/autotest_common.sh@10 -- # set +x 00:33:39.502 Malloc_QD 00:33:39.502 16:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.502 16:10:43 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:33:39.502 16:10:43 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:33:39.502 16:10:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:39.502 16:10:43 -- common/autotest_common.sh@889 -- # local i 00:33:39.502 16:10:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:39.502 16:10:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:39.502 16:10:43 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:39.502 16:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.502 16:10:43 -- common/autotest_common.sh@10 -- # set +x 00:33:39.760 16:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.760 16:10:43 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:33:39.760 16:10:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:39.760 16:10:43 -- common/autotest_common.sh@10 -- # set +x 00:33:39.760 [ 00:33:39.760 { 00:33:39.760 "name": "Malloc_QD", 00:33:39.760 "aliases": [ 00:33:39.760 "b467ba9c-d475-4e4a-8fb0-fb1a645b7078" 00:33:39.760 ], 00:33:39.760 "product_name": "Malloc disk", 00:33:39.760 "block_size": 512, 00:33:39.760 "num_blocks": 262144, 
00:33:39.760 "uuid": "b467ba9c-d475-4e4a-8fb0-fb1a645b7078", 00:33:39.760 "assigned_rate_limits": { 00:33:39.760 "rw_ios_per_sec": 0, 00:33:39.760 "rw_mbytes_per_sec": 0, 00:33:39.760 "r_mbytes_per_sec": 0, 00:33:39.760 "w_mbytes_per_sec": 0 00:33:39.760 }, 00:33:39.760 "claimed": false, 00:33:39.760 "zoned": false, 00:33:39.760 "supported_io_types": { 00:33:39.760 "read": true, 00:33:39.760 "write": true, 00:33:39.760 "unmap": true, 00:33:39.760 "write_zeroes": true, 00:33:39.760 "flush": true, 00:33:39.760 "reset": true, 00:33:39.760 "compare": false, 00:33:39.760 "compare_and_write": false, 00:33:39.760 "abort": true, 00:33:39.760 "nvme_admin": false, 00:33:39.760 "nvme_io": false 00:33:39.760 }, 00:33:39.760 "memory_domains": [ 00:33:39.760 { 00:33:39.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:39.760 "dma_device_type": 2 00:33:39.760 } 00:33:39.760 ], 00:33:39.760 "driver_specific": {} 00:33:39.760 } 00:33:39.760 ] 00:33:39.760 16:10:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:39.760 16:10:43 -- common/autotest_common.sh@895 -- # return 0 00:33:39.760 16:10:43 -- bdev/blockdev.sh@548 -- # sleep 2 00:33:39.760 16:10:43 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:39.760 Running I/O for 5 seconds... 00:33:41.658 16:10:45 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:33:41.658 16:10:45 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:33:41.658 16:10:45 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:33:41.658 16:10:45 -- bdev/blockdev.sh@519 -- # local iostats 00:33:41.658 16:10:45 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:33:41.658 16:10:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.658 16:10:45 -- common/autotest_common.sh@10 -- # set +x 00:33:41.658 16:10:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.658 16:10:45 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:33:41.658 16:10:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.658 16:10:45 -- common/autotest_common.sh@10 -- # set +x 00:33:41.658 16:10:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.658 16:10:45 -- bdev/blockdev.sh@523 -- # iostats='{ 00:33:41.658 "tick_rate": 2200000000, 00:33:41.658 "ticks": 1943026569040, 00:33:41.658 "bdevs": [ 00:33:41.658 { 00:33:41.658 "name": "Malloc_QD", 00:33:41.658 "bytes_read": 527471104, 00:33:41.658 "num_read_ops": 128771, 00:33:41.658 "bytes_written": 0, 00:33:41.658 "num_write_ops": 0, 00:33:41.658 "bytes_unmapped": 0, 00:33:41.658 "num_unmap_ops": 0, 00:33:41.658 "bytes_copied": 0, 00:33:41.658 "num_copy_ops": 0, 00:33:41.658 "read_latency_ticks": 2118537009985, 00:33:41.658 "max_read_latency_ticks": 37867642, 00:33:41.658 "min_read_latency_ticks": 397224, 00:33:41.658 "write_latency_ticks": 0, 00:33:41.658 "max_write_latency_ticks": 0, 00:33:41.658 "min_write_latency_ticks": 0, 00:33:41.658 "unmap_latency_ticks": 0, 00:33:41.658 "max_unmap_latency_ticks": 0, 00:33:41.658 "min_unmap_latency_ticks": 0, 00:33:41.658 "copy_latency_ticks": 0, 00:33:41.658 "max_copy_latency_ticks": 0, 00:33:41.658 "min_copy_latency_ticks": 0, 00:33:41.658 "io_error": {}, 00:33:41.658 "queue_depth_polling_period": 10, 00:33:41.658 "queue_depth": 512, 00:33:41.658 "io_time": 30, 00:33:41.658 "weighted_io_time": 15360 00:33:41.658 } 00:33:41.658 ] 00:33:41.658 }' 00:33:41.658 16:10:45 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
00:33:41.658 16:10:45 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:33:41.658 16:10:45 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:33:41.658 16:10:45 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:33:41.658 16:10:45 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:33:41.658 16:10:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:41.658 16:10:45 -- common/autotest_common.sh@10 -- # set +x 00:33:41.658 00:33:41.658 Latency(us) 00:33:41.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.658 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:33:41.658 Malloc_QD : 1.92 33403.96 130.48 0.00 0.00 7639.48 1288.38 17277.67 00:33:41.658 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:33:41.658 Malloc_QD : 1.92 35653.80 139.27 0.00 0.00 7157.79 1005.38 11379.43 00:33:41.658 =================================================================================================================== 00:33:41.658 Total : 69057.76 269.76 0.00 0.00 7390.75 1005.38 17277.67 00:33:41.917 0 00:33:41.917 16:10:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:41.917 16:10:45 -- bdev/blockdev.sh@552 -- # killprocess 69006 00:33:41.917 16:10:45 -- common/autotest_common.sh@926 -- # '[' -z 69006 ']' 00:33:41.917 16:10:45 -- common/autotest_common.sh@930 -- # kill -0 69006 00:33:41.917 16:10:45 -- common/autotest_common.sh@931 -- # uname 00:33:41.917 16:10:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:41.917 16:10:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69006 00:33:41.917 killing process with pid 69006 00:33:41.917 Received shutdown signal, test time was about 2.091629 seconds 00:33:41.917 00:33:41.917 Latency(us) 00:33:41.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.917 =================================================================================================================== 00:33:41.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.917 16:10:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:41.917 16:10:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:41.917 16:10:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69006' 00:33:41.917 16:10:46 -- common/autotest_common.sh@945 -- # kill 69006 00:33:41.917 16:10:46 -- common/autotest_common.sh@950 -- # wait 69006 00:33:43.293 ************************************ 00:33:43.293 END TEST bdev_qd_sampling 00:33:43.293 ************************************ 00:33:43.293 16:10:47 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:33:43.293 00:33:43.293 real 0m5.028s 00:33:43.293 user 0m9.106s 00:33:43.293 sys 0m0.575s 00:33:43.293 16:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.293 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:33:43.550 16:10:47 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:33:43.550 16:10:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:43.550 16:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:43.550 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:33:43.550 ************************************ 00:33:43.550 START TEST bdev_error 00:33:43.550 ************************************ 00:33:43.550 16:10:47 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:33:43.550 16:10:47 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:33:43.550 16:10:47 
-- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:33:43.550 16:10:47 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:33:43.550 16:10:47 -- bdev/blockdev.sh@470 -- # ERR_PID=69093 00:33:43.550 Process error testing pid: 69093 00:33:43.550 16:10:47 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 69093' 00:33:43.550 16:10:47 -- bdev/blockdev.sh@472 -- # waitforlisten 69093 00:33:43.550 16:10:47 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:33:43.550 16:10:47 -- common/autotest_common.sh@819 -- # '[' -z 69093 ']' 00:33:43.550 16:10:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.550 16:10:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:43.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.550 16:10:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.550 16:10:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:43.550 16:10:47 -- common/autotest_common.sh@10 -- # set +x 00:33:43.550 [2024-07-22 16:10:47.663389] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:43.550 [2024-07-22 16:10:47.663538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69093 ] 00:33:43.808 [2024-07-22 16:10:47.831887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.065 [2024-07-22 16:10:48.132319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.634 16:10:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:44.634 16:10:48 -- common/autotest_common.sh@852 -- # return 0 00:33:44.634 16:10:48 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:33:44.634 16:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.634 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:33:44.634 Dev_1 00:33:44.634 16:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.634 16:10:48 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:33:44.634 16:10:48 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:33:44.634 16:10:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:44.634 16:10:48 -- common/autotest_common.sh@889 -- # local i 00:33:44.634 16:10:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:44.634 16:10:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:44.634 16:10:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:44.634 16:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.634 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:33:44.634 16:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.634 16:10:48 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:33:44.634 16:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.634 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:33:44.634 [ 00:33:44.634 { 00:33:44.634 "name": "Dev_1", 00:33:44.634 "aliases": [ 00:33:44.634 "2b8e49a8-c204-4522-b877-bafb2a2231b0" 00:33:44.634 ], 00:33:44.634 "product_name": "Malloc disk", 00:33:44.634 "block_size": 512, 00:33:44.634 "num_blocks": 262144, 00:33:44.634 "uuid": 
"2b8e49a8-c204-4522-b877-bafb2a2231b0", 00:33:44.634 "assigned_rate_limits": { 00:33:44.634 "rw_ios_per_sec": 0, 00:33:44.634 "rw_mbytes_per_sec": 0, 00:33:44.634 "r_mbytes_per_sec": 0, 00:33:44.634 "w_mbytes_per_sec": 0 00:33:44.634 }, 00:33:44.634 "claimed": false, 00:33:44.634 "zoned": false, 00:33:44.634 "supported_io_types": { 00:33:44.634 "read": true, 00:33:44.634 "write": true, 00:33:44.634 "unmap": true, 00:33:44.634 "write_zeroes": true, 00:33:44.634 "flush": true, 00:33:44.634 "reset": true, 00:33:44.634 "compare": false, 00:33:44.634 "compare_and_write": false, 00:33:44.634 "abort": true, 00:33:44.634 "nvme_admin": false, 00:33:44.634 "nvme_io": false 00:33:44.634 }, 00:33:44.634 "memory_domains": [ 00:33:44.634 { 00:33:44.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.634 "dma_device_type": 2 00:33:44.634 } 00:33:44.634 ], 00:33:44.634 "driver_specific": {} 00:33:44.634 } 00:33:44.634 ] 00:33:44.634 16:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.634 16:10:48 -- common/autotest_common.sh@895 -- # return 0 00:33:44.634 16:10:48 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:33:44.634 16:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.634 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:33:44.634 true 00:33:44.634 16:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.634 16:10:48 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:33:44.634 16:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.634 16:10:48 -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 Dev_2 00:33:44.896 16:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.896 16:10:49 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:33:44.896 16:10:49 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:33:44.896 16:10:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:44.896 16:10:49 -- common/autotest_common.sh@889 -- # local i 00:33:44.896 16:10:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:44.896 16:10:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:44.896 16:10:49 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:44.896 16:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.896 16:10:49 -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 16:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.896 16:10:49 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:33:44.896 16:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.896 16:10:49 -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 [ 00:33:44.896 { 00:33:44.896 "name": "Dev_2", 00:33:44.896 "aliases": [ 00:33:44.896 "f5f099b3-b029-40a4-8ca8-96e13d281337" 00:33:44.896 ], 00:33:44.896 "product_name": "Malloc disk", 00:33:44.896 "block_size": 512, 00:33:44.896 "num_blocks": 262144, 00:33:44.896 "uuid": "f5f099b3-b029-40a4-8ca8-96e13d281337", 00:33:44.896 "assigned_rate_limits": { 00:33:44.896 "rw_ios_per_sec": 0, 00:33:44.896 "rw_mbytes_per_sec": 0, 00:33:44.896 "r_mbytes_per_sec": 0, 00:33:44.896 "w_mbytes_per_sec": 0 00:33:44.896 }, 00:33:44.896 "claimed": false, 00:33:44.896 "zoned": false, 00:33:44.896 "supported_io_types": { 00:33:44.896 "read": true, 00:33:44.896 "write": true, 00:33:44.896 "unmap": true, 00:33:44.896 "write_zeroes": true, 00:33:44.896 "flush": true, 00:33:44.896 "reset": true, 00:33:44.896 "compare": false, 00:33:44.896 
"compare_and_write": false, 00:33:44.896 "abort": true, 00:33:44.896 "nvme_admin": false, 00:33:44.896 "nvme_io": false 00:33:44.896 }, 00:33:44.896 "memory_domains": [ 00:33:44.896 { 00:33:44.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:44.896 "dma_device_type": 2 00:33:44.896 } 00:33:44.896 ], 00:33:44.896 "driver_specific": {} 00:33:44.896 } 00:33:44.896 ] 00:33:44.896 16:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.896 16:10:49 -- common/autotest_common.sh@895 -- # return 0 00:33:44.896 16:10:49 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:33:44.896 16:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:44.896 16:10:49 -- common/autotest_common.sh@10 -- # set +x 00:33:44.896 16:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:44.896 16:10:49 -- bdev/blockdev.sh@482 -- # sleep 1 00:33:44.896 16:10:49 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:33:45.155 Running I/O for 5 seconds... 00:33:46.090 16:10:50 -- bdev/blockdev.sh@485 -- # kill -0 69093 00:33:46.090 Process is existed as continue on error is set. Pid: 69093 00:33:46.090 16:10:50 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 69093' 00:33:46.090 16:10:50 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:33:46.090 16:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.090 16:10:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.090 16:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.090 16:10:50 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:33:46.090 16:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:46.090 16:10:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.090 Timeout while waiting for response: 00:33:46.090 00:33:46.090 00:33:46.348 16:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:46.348 16:10:50 -- bdev/blockdev.sh@495 -- # sleep 5 00:33:50.535 00:33:50.535 Latency(us) 00:33:50.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.535 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:33:50.535 EE_Dev_1 : 0.89 31686.92 123.78 5.62 0.00 501.23 171.29 1072.41 00:33:50.535 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:33:50.535 Dev_2 : 5.00 65374.56 255.37 0.00 0.00 241.22 70.28 352702.84 00:33:50.535 =================================================================================================================== 00:33:50.535 Total : 97061.48 379.15 5.62 0.00 261.85 70.28 352702.84 00:33:51.471 16:10:55 -- bdev/blockdev.sh@497 -- # killprocess 69093 00:33:51.471 16:10:55 -- common/autotest_common.sh@926 -- # '[' -z 69093 ']' 00:33:51.471 16:10:55 -- common/autotest_common.sh@930 -- # kill -0 69093 00:33:51.471 16:10:55 -- common/autotest_common.sh@931 -- # uname 00:33:51.471 16:10:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:51.471 16:10:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69093 00:33:51.471 killing process with pid 69093 00:33:51.471 Received shutdown signal, test time was about 5.000000 seconds 00:33:51.471 00:33:51.471 Latency(us) 00:33:51.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.471 =================================================================================================================== 00:33:51.471 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:51.471 16:10:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:33:51.471 16:10:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:33:51.471 16:10:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69093' 00:33:51.471 16:10:55 -- common/autotest_common.sh@945 -- # kill 69093 00:33:51.471 16:10:55 -- common/autotest_common.sh@950 -- # wait 69093 00:33:53.375 16:10:57 -- bdev/blockdev.sh@501 -- # ERR_PID=69206 00:33:53.375 16:10:57 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:33:53.375 16:10:57 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 69206' 00:33:53.375 Process error testing pid: 69206 00:33:53.375 16:10:57 -- bdev/blockdev.sh@503 -- # waitforlisten 69206 00:33:53.375 16:10:57 -- common/autotest_common.sh@819 -- # '[' -z 69206 ']' 00:33:53.375 16:10:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.375 16:10:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:53.375 16:10:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.375 16:10:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:53.375 16:10:57 -- common/autotest_common.sh@10 -- # set +x 00:33:53.375 [2024-07-22 16:10:57.231585] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:53.375 [2024-07-22 16:10:57.232090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69206 ] 00:33:53.375 [2024-07-22 16:10:57.399599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.632 [2024-07-22 16:10:57.692822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.198 16:10:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:54.198 16:10:58 -- common/autotest_common.sh@852 -- # return 0 00:33:54.198 16:10:58 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:33:54.198 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.198 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:33:54.198 Dev_1 00:33:54.198 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.198 16:10:58 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:33:54.198 16:10:58 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:33:54.198 16:10:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:54.198 16:10:58 -- common/autotest_common.sh@889 -- # local i 00:33:54.198 16:10:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:54.198 16:10:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:54.198 16:10:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:54.198 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.198 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:33:54.198 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.198 16:10:58 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:33:54.198 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.198 16:10:58 -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.198 [ 00:33:54.198 { 00:33:54.198 "name": "Dev_1", 00:33:54.198 "aliases": [ 00:33:54.198 "eb9c632a-511b-4fa6-a34e-5bcbf770a2f9" 00:33:54.198 ], 00:33:54.198 "product_name": "Malloc disk", 00:33:54.198 "block_size": 512, 00:33:54.198 "num_blocks": 262144, 00:33:54.198 "uuid": "eb9c632a-511b-4fa6-a34e-5bcbf770a2f9", 00:33:54.198 "assigned_rate_limits": { 00:33:54.198 "rw_ios_per_sec": 0, 00:33:54.198 "rw_mbytes_per_sec": 0, 00:33:54.198 "r_mbytes_per_sec": 0, 00:33:54.198 "w_mbytes_per_sec": 0 00:33:54.198 }, 00:33:54.198 "claimed": false, 00:33:54.198 "zoned": false, 00:33:54.198 "supported_io_types": { 00:33:54.198 "read": true, 00:33:54.198 "write": true, 00:33:54.198 "unmap": true, 00:33:54.198 "write_zeroes": true, 00:33:54.198 "flush": true, 00:33:54.198 "reset": true, 00:33:54.198 "compare": false, 00:33:54.198 "compare_and_write": false, 00:33:54.198 "abort": true, 00:33:54.198 "nvme_admin": false, 00:33:54.198 "nvme_io": false 00:33:54.198 }, 00:33:54.198 "memory_domains": [ 00:33:54.198 { 00:33:54.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.198 "dma_device_type": 2 00:33:54.198 } 00:33:54.198 ], 00:33:54.198 "driver_specific": {} 00:33:54.198 } 00:33:54.198 ] 00:33:54.198 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.198 16:10:58 -- common/autotest_common.sh@895 -- # return 0 00:33:54.198 16:10:58 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:33:54.198 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.198 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:33:54.198 true 00:33:54.198 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.198 16:10:58 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:33:54.199 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.199 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 Dev_2 00:33:54.457 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.457 16:10:58 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:33:54.457 16:10:58 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:33:54.457 16:10:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:54.457 16:10:58 -- common/autotest_common.sh@889 -- # local i 00:33:54.457 16:10:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:54.457 16:10:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:54.457 16:10:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:54.457 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.457 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.457 16:10:58 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:33:54.457 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.457 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 [ 00:33:54.457 { 00:33:54.457 "name": "Dev_2", 00:33:54.457 "aliases": [ 00:33:54.457 "07326ea7-b5c5-4e5b-a5a3-48055422b707" 00:33:54.457 ], 00:33:54.457 "product_name": "Malloc disk", 00:33:54.457 "block_size": 512, 00:33:54.457 "num_blocks": 262144, 00:33:54.457 "uuid": "07326ea7-b5c5-4e5b-a5a3-48055422b707", 00:33:54.457 "assigned_rate_limits": { 00:33:54.457 "rw_ios_per_sec": 0, 00:33:54.457 "rw_mbytes_per_sec": 0, 00:33:54.457 "r_mbytes_per_sec": 0, 00:33:54.457 "w_mbytes_per_sec": 0 00:33:54.457 }, 
00:33:54.457 "claimed": false, 00:33:54.457 "zoned": false, 00:33:54.457 "supported_io_types": { 00:33:54.457 "read": true, 00:33:54.457 "write": true, 00:33:54.457 "unmap": true, 00:33:54.457 "write_zeroes": true, 00:33:54.457 "flush": true, 00:33:54.457 "reset": true, 00:33:54.457 "compare": false, 00:33:54.457 "compare_and_write": false, 00:33:54.457 "abort": true, 00:33:54.457 "nvme_admin": false, 00:33:54.457 "nvme_io": false 00:33:54.457 }, 00:33:54.457 "memory_domains": [ 00:33:54.457 { 00:33:54.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.457 "dma_device_type": 2 00:33:54.457 } 00:33:54.457 ], 00:33:54.457 "driver_specific": {} 00:33:54.457 } 00:33:54.457 ] 00:33:54.457 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.457 16:10:58 -- common/autotest_common.sh@895 -- # return 0 00:33:54.457 16:10:58 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:33:54.457 16:10:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:54.457 16:10:58 -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 16:10:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:54.457 16:10:58 -- bdev/blockdev.sh@513 -- # NOT wait 69206 00:33:54.457 16:10:58 -- common/autotest_common.sh@640 -- # local es=0 00:33:54.457 16:10:58 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 69206 00:33:54.457 16:10:58 -- common/autotest_common.sh@628 -- # local arg=wait 00:33:54.457 16:10:58 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:33:54.457 16:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:54.457 16:10:58 -- common/autotest_common.sh@632 -- # type -t wait 00:33:54.457 16:10:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:33:54.457 16:10:58 -- common/autotest_common.sh@643 -- # wait 69206 00:33:54.457 Running I/O for 5 seconds... 
00:33:54.457 task offset: 7584 on job bdev=EE_Dev_1 fails 00:33:54.457 00:33:54.457 Latency(us) 00:33:54.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.457 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:33:54.457 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:33:54.457 EE_Dev_1 : 0.00 22988.51 89.80 5224.66 0.00 461.28 228.07 841.54 00:33:54.457 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:33:54.457 Dev_2 : 0.00 16359.92 63.91 0.00 0.00 658.73 156.39 1206.46 00:33:54.457 =================================================================================================================== 00:33:54.457 Total : 39348.42 153.70 5224.66 0.00 568.38 156.39 1206.46 00:33:54.457 [2024-07-22 16:10:58.692750] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:54.457 request: 00:33:54.457 { 00:33:54.457 "method": "perform_tests", 00:33:54.457 "req_id": 1 00:33:54.457 } 00:33:54.457 Got JSON-RPC error response 00:33:54.457 response: 00:33:54.457 { 00:33:54.457 "code": -32603, 00:33:54.457 "message": "bdevperf failed with error Operation not permitted" 00:33:54.457 } 00:33:56.988 16:11:00 -- common/autotest_common.sh@643 -- # es=255 00:33:56.988 16:11:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:33:56.988 16:11:00 -- common/autotest_common.sh@652 -- # es=127 00:33:56.988 16:11:00 -- common/autotest_common.sh@653 -- # case "$es" in 00:33:56.988 16:11:00 -- common/autotest_common.sh@660 -- # es=1 00:33:56.988 16:11:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:33:56.988 00:33:56.988 real 0m13.079s 00:33:56.988 user 0m13.112s 00:33:56.988 sys 0m1.167s 00:33:56.988 16:11:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:56.988 ************************************ 00:33:56.988 END TEST bdev_error 00:33:56.988 ************************************ 00:33:56.988 16:11:00 -- common/autotest_common.sh@10 -- # set +x 00:33:56.988 16:11:00 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:33:56.988 16:11:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:56.988 16:11:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:56.988 16:11:00 -- common/autotest_common.sh@10 -- # set +x 00:33:56.988 ************************************ 00:33:56.988 START TEST bdev_stat 00:33:56.988 ************************************ 00:33:56.988 16:11:00 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:33:56.988 16:11:00 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:33:56.988 16:11:00 -- bdev/blockdev.sh@594 -- # STAT_PID=69265 00:33:56.988 Process Bdev IO statistics testing pid: 69265 00:33:56.988 16:11:00 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 69265' 00:33:56.988 16:11:00 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:33:56.988 16:11:00 -- bdev/blockdev.sh@597 -- # waitforlisten 69265 00:33:56.988 16:11:00 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:33:56.988 16:11:00 -- common/autotest_common.sh@819 -- # '[' -z 69265 ']' 00:33:56.988 16:11:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.988 16:11:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:56.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
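The bdev_stat trace that follows compares three bdev_get_iostat snapshots taken around a 10-second randread job on Malloc_STAT: a whole-device snapshot, a per-channel snapshot (-c) after I/O has run on both cores, and a second whole-device snapshot. A minimal sketch of those calls under the same assumptions, with a hypothetical jq sum in place of the harness's per-channel bookkeeping:

  ./scripts/rpc.py bdev_malloc_create -b Malloc_STAT 128 512
  before=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  # ... run I/O on both cores (the harness uses bdevperf -m 0x3 plus bdevperf.py perform_tests) ...
  per_chan=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq '[.channels[].num_read_ops] | add')
  after=$(./scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
  # pass condition, checked near the end of the trace: before <= per_chan <= after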
00:33:56.989 16:11:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.989 16:11:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:56.989 16:11:00 -- common/autotest_common.sh@10 -- # set +x 00:33:56.989 [2024-07-22 16:11:00.824060] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:56.989 [2024-07-22 16:11:00.824333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69265 ] 00:33:56.989 [2024-07-22 16:11:01.010458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:57.247 [2024-07-22 16:11:01.275120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.247 [2024-07-22 16:11:01.275151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.505 16:11:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:57.505 16:11:01 -- common/autotest_common.sh@852 -- # return 0 00:33:57.505 16:11:01 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:33:57.505 16:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.505 16:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.763 Malloc_STAT 00:33:57.763 16:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.763 16:11:01 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:33:57.763 16:11:01 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:33:57.763 16:11:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:33:57.763 16:11:01 -- common/autotest_common.sh@889 -- # local i 00:33:57.763 16:11:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:33:57.763 16:11:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:33:57.763 16:11:01 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:33:57.763 16:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.763 16:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.763 16:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.763 16:11:01 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:33:57.763 16:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.763 16:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.763 [ 00:33:57.763 { 00:33:57.763 "name": "Malloc_STAT", 00:33:57.763 "aliases": [ 00:33:57.763 "4d27787c-a3df-4edd-8496-83a89a28f06e" 00:33:57.763 ], 00:33:57.763 "product_name": "Malloc disk", 00:33:57.763 "block_size": 512, 00:33:57.763 "num_blocks": 262144, 00:33:57.763 "uuid": "4d27787c-a3df-4edd-8496-83a89a28f06e", 00:33:57.763 "assigned_rate_limits": { 00:33:57.763 "rw_ios_per_sec": 0, 00:33:57.763 "rw_mbytes_per_sec": 0, 00:33:57.763 "r_mbytes_per_sec": 0, 00:33:57.763 "w_mbytes_per_sec": 0 00:33:57.763 }, 00:33:57.763 "claimed": false, 00:33:57.763 "zoned": false, 00:33:57.763 "supported_io_types": { 00:33:57.763 "read": true, 00:33:57.764 "write": true, 00:33:57.764 "unmap": true, 00:33:57.764 "write_zeroes": true, 00:33:57.764 "flush": true, 00:33:57.764 "reset": true, 00:33:57.764 "compare": false, 00:33:57.764 "compare_and_write": false, 00:33:57.764 "abort": true, 00:33:57.764 "nvme_admin": false, 00:33:57.764 "nvme_io": false 00:33:57.764 }, 00:33:57.764 "memory_domains": [ 00:33:57.764 { 00:33:57.764 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.764 "dma_device_type": 2 00:33:57.764 } 00:33:57.764 ], 00:33:57.764 "driver_specific": {} 00:33:57.764 } 00:33:57.764 ] 00:33:57.764 16:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.764 16:11:01 -- common/autotest_common.sh@895 -- # return 0 00:33:57.764 16:11:01 -- bdev/blockdev.sh@603 -- # sleep 2 00:33:57.764 16:11:01 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:58.022 Running I/O for 10 seconds... 00:33:59.920 16:11:03 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:33:59.920 16:11:03 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:33:59.920 16:11:03 -- bdev/blockdev.sh@558 -- # local iostats 00:33:59.920 16:11:03 -- bdev/blockdev.sh@559 -- # local io_count1 00:33:59.920 16:11:03 -- bdev/blockdev.sh@560 -- # local io_count2 00:33:59.920 16:11:03 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:33:59.920 16:11:03 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:33:59.920 16:11:03 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:33:59.920 16:11:03 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:33:59.920 16:11:03 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:33:59.920 16:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.920 16:11:03 -- common/autotest_common.sh@10 -- # set +x 00:33:59.920 16:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.920 16:11:03 -- bdev/blockdev.sh@566 -- # iostats='{ 00:33:59.920 "tick_rate": 2200000000, 00:33:59.920 "ticks": 1982896529626, 00:33:59.920 "bdevs": [ 00:33:59.920 { 00:33:59.920 "name": "Malloc_STAT", 00:33:59.920 "bytes_read": 455119360, 00:33:59.920 "num_read_ops": 111107, 00:33:59.920 "bytes_written": 0, 00:33:59.921 "num_write_ops": 0, 00:33:59.921 "bytes_unmapped": 0, 00:33:59.921 "num_unmap_ops": 0, 00:33:59.921 "bytes_copied": 0, 00:33:59.921 "num_copy_ops": 0, 00:33:59.921 "read_latency_ticks": 2115268108160, 00:33:59.921 "max_read_latency_ticks": 30551934, 00:33:59.921 "min_read_latency_ticks": 400610, 00:33:59.921 "write_latency_ticks": 0, 00:33:59.921 "max_write_latency_ticks": 0, 00:33:59.921 "min_write_latency_ticks": 0, 00:33:59.921 "unmap_latency_ticks": 0, 00:33:59.921 "max_unmap_latency_ticks": 0, 00:33:59.921 "min_unmap_latency_ticks": 0, 00:33:59.921 "copy_latency_ticks": 0, 00:33:59.921 "max_copy_latency_ticks": 0, 00:33:59.921 "min_copy_latency_ticks": 0, 00:33:59.921 "io_error": {} 00:33:59.921 } 00:33:59.921 ] 00:33:59.921 }' 00:33:59.921 16:11:03 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:33:59.921 16:11:03 -- bdev/blockdev.sh@567 -- # io_count1=111107 00:33:59.921 16:11:03 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:33:59.921 16:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.921 16:11:03 -- common/autotest_common.sh@10 -- # set +x 00:33:59.921 16:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.921 16:11:03 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:33:59.921 "tick_rate": 2200000000, 00:33:59.921 "ticks": 1982974467206, 00:33:59.921 "name": "Malloc_STAT", 00:33:59.921 "channels": [ 00:33:59.921 { 00:33:59.921 "thread_id": 2, 00:33:59.921 "bytes_read": 236978176, 00:33:59.921 "num_read_ops": 57856, 00:33:59.921 "bytes_written": 0, 00:33:59.921 "num_write_ops": 0, 00:33:59.921 "bytes_unmapped": 0, 00:33:59.921 "num_unmap_ops": 0, 00:33:59.921 "bytes_copied": 
0, 00:33:59.921 "num_copy_ops": 0, 00:33:59.921 "read_latency_ticks": 1075472175389, 00:33:59.921 "max_read_latency_ticks": 23029748, 00:33:59.921 "min_read_latency_ticks": 15412428, 00:33:59.921 "write_latency_ticks": 0, 00:33:59.921 "max_write_latency_ticks": 0, 00:33:59.921 "min_write_latency_ticks": 0, 00:33:59.921 "unmap_latency_ticks": 0, 00:33:59.921 "max_unmap_latency_ticks": 0, 00:33:59.921 "min_unmap_latency_ticks": 0, 00:33:59.921 "copy_latency_ticks": 0, 00:33:59.921 "max_copy_latency_ticks": 0, 00:33:59.921 "min_copy_latency_ticks": 0 00:33:59.921 }, 00:33:59.921 { 00:33:59.921 "thread_id": 3, 00:33:59.921 "bytes_read": 226492416, 00:33:59.921 "num_read_ops": 55296, 00:33:59.921 "bytes_written": 0, 00:33:59.921 "num_write_ops": 0, 00:33:59.921 "bytes_unmapped": 0, 00:33:59.921 "num_unmap_ops": 0, 00:33:59.921 "bytes_copied": 0, 00:33:59.921 "num_copy_ops": 0, 00:33:59.921 "read_latency_ticks": 1078808466137, 00:33:59.921 "max_read_latency_ticks": 30551934, 00:33:59.921 "min_read_latency_ticks": 11872925, 00:33:59.921 "write_latency_ticks": 0, 00:33:59.921 "max_write_latency_ticks": 0, 00:33:59.921 "min_write_latency_ticks": 0, 00:33:59.921 "unmap_latency_ticks": 0, 00:33:59.921 "max_unmap_latency_ticks": 0, 00:33:59.921 "min_unmap_latency_ticks": 0, 00:33:59.921 "copy_latency_ticks": 0, 00:33:59.921 "max_copy_latency_ticks": 0, 00:33:59.921 "min_copy_latency_ticks": 0 00:33:59.921 } 00:33:59.921 ] 00:33:59.921 }' 00:33:59.921 16:11:03 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:33:59.921 16:11:04 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=57856 00:33:59.921 16:11:04 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=57856 00:33:59.921 16:11:04 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:33:59.921 16:11:04 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=55296 00:33:59.921 16:11:04 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=113152 00:33:59.921 16:11:04 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:33:59.921 16:11:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.921 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:33:59.921 16:11:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:59.921 16:11:04 -- bdev/blockdev.sh@575 -- # iostats='{ 00:33:59.921 "tick_rate": 2200000000, 00:33:59.921 "ticks": 1983070551945, 00:33:59.921 "bdevs": [ 00:33:59.921 { 00:33:59.921 "name": "Malloc_STAT", 00:33:59.921 "bytes_read": 473993728, 00:33:59.921 "num_read_ops": 115715, 00:33:59.921 "bytes_written": 0, 00:33:59.921 "num_write_ops": 0, 00:33:59.921 "bytes_unmapped": 0, 00:33:59.921 "num_unmap_ops": 0, 00:33:59.921 "bytes_copied": 0, 00:33:59.921 "num_copy_ops": 0, 00:33:59.921 "read_latency_ticks": 2202826242306, 00:33:59.921 "max_read_latency_ticks": 30551934, 00:33:59.921 "min_read_latency_ticks": 400610, 00:33:59.921 "write_latency_ticks": 0, 00:33:59.921 "max_write_latency_ticks": 0, 00:33:59.921 "min_write_latency_ticks": 0, 00:33:59.921 "unmap_latency_ticks": 0, 00:33:59.921 "max_unmap_latency_ticks": 0, 00:33:59.921 "min_unmap_latency_ticks": 0, 00:33:59.921 "copy_latency_ticks": 0, 00:33:59.921 "max_copy_latency_ticks": 0, 00:33:59.921 "min_copy_latency_ticks": 0, 00:33:59.921 "io_error": {} 00:33:59.921 } 00:33:59.921 ] 00:33:59.921 }' 00:33:59.921 16:11:04 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:33:59.921 16:11:04 -- bdev/blockdev.sh@576 -- # io_count2=115715 00:33:59.921 16:11:04 -- bdev/blockdev.sh@581 -- # '[' 113152 -lt 111107 ']' 
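The bracket test just above and the one that follows are that consistency check worked with this run's numbers: the per-channel reads summed across both threads (57856 + 55296 = 113152) must be no smaller than the first whole-device count (111107) and no larger than the second (115715). Both conditions hold here, so the suite prints the per-job latency table and tears Malloc_STAT down.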
00:33:59.921 16:11:04 -- bdev/blockdev.sh@581 -- # '[' 113152 -gt 115715 ']' 00:33:59.921 16:11:04 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:33:59.921 16:11:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:59.921 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:33:59.921 00:33:59.921 Latency(us) 00:33:59.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.921 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:33:59.921 Malloc_STAT : 2.01 30178.51 117.88 0.00 0.00 8448.06 2204.39 10485.76 00:33:59.921 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:33:59.921 Malloc_STAT : 2.01 28896.84 112.88 0.00 0.00 8834.29 1683.08 13941.29 00:33:59.921 =================================================================================================================== 00:33:59.921 Total : 59075.35 230.76 0.00 0.00 8637.01 1683.08 13941.29 00:34:00.179 0 00:34:00.179 16:11:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:00.179 16:11:04 -- bdev/blockdev.sh@607 -- # killprocess 69265 00:34:00.179 16:11:04 -- common/autotest_common.sh@926 -- # '[' -z 69265 ']' 00:34:00.179 16:11:04 -- common/autotest_common.sh@930 -- # kill -0 69265 00:34:00.179 16:11:04 -- common/autotest_common.sh@931 -- # uname 00:34:00.179 16:11:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:00.179 16:11:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69265 00:34:00.179 16:11:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:00.179 killing process with pid 69265 00:34:00.179 16:11:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:00.179 16:11:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69265' 00:34:00.179 16:11:04 -- common/autotest_common.sh@945 -- # kill 69265 00:34:00.179 Received shutdown signal, test time was about 2.178523 seconds 00:34:00.179 00:34:00.179 Latency(us) 00:34:00.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.179 =================================================================================================================== 00:34:00.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.179 16:11:04 -- common/autotest_common.sh@950 -- # wait 69265 00:34:01.553 16:11:05 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:34:01.554 00:34:01.554 real 0m5.023s 00:34:01.554 user 0m9.078s 00:34:01.554 sys 0m0.575s 00:34:01.554 ************************************ 00:34:01.554 END TEST bdev_stat 00:34:01.554 ************************************ 00:34:01.554 16:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.554 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:01.554 16:11:05 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:34:01.554 16:11:05 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:34:01.554 16:11:05 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:34:01.554 16:11:05 -- bdev/blockdev.sh@809 -- # cleanup 00:34:01.554 16:11:05 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:01.554 16:11:05 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:01.554 16:11:05 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:34:01.554 16:11:05 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:34:01.554 16:11:05 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:34:01.554 16:11:05 -- bdev/blockdev.sh@38 -- 
# [[ bdev == xnvme ]] 00:34:01.554 00:34:01.554 real 2m32.416s 00:34:01.554 user 6m3.856s 00:34:01.554 sys 0m24.119s 00:34:01.554 16:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.554 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:01.554 ************************************ 00:34:01.554 END TEST blockdev_general 00:34:01.554 ************************************ 00:34:01.812 16:11:05 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:34:01.812 16:11:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:01.812 16:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:01.812 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:01.812 ************************************ 00:34:01.812 START TEST bdev_raid 00:34:01.812 ************************************ 00:34:01.812 16:11:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:34:01.812 * Looking for test storage... 00:34:01.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:01.812 16:11:05 -- bdev/nbd_common.sh@6 -- # set -e 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@716 -- # uname -s 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:34:01.812 16:11:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:01.812 16:11:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:01.812 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:01.812 ************************************ 00:34:01.812 START TEST raid_function_test_raid0 00:34:01.812 ************************************ 00:34:01.812 16:11:05 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@86 -- # raid_pid=69405 00:34:01.812 Process raid pid: 69405 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 69405' 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@88 -- # waitforlisten 69405 /var/tmp/spdk-raid.sock 00:34:01.812 16:11:05 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:01.812 16:11:05 -- common/autotest_common.sh@819 -- # '[' -z 69405 ']' 00:34:01.812 16:11:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:01.812 16:11:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:01.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:01.812 16:11:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
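Each raid test in bdev_raid.sh drives a dedicated bdev_svc instance over its own RPC socket (/var/tmp/spdk-raid.sock) through the rpc_py alias set above. A minimal sketch of that pattern follows; the polling loop is an assumption standing in for the harness's waitforlisten helper, everything else mirrors the invocations in this trace.

SPDK=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock

# Start a private bdev_svc app with raid debug logging on its own socket so it
# cannot collide with other SPDK instances on the host.
$SPDK/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!

# Wait until the app answers RPCs (cheap RPC poll in place of waitforlisten).
until $SPDK/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

rpc_py="$SPDK/scripts/rpc.py -s $sock"
$rpc_py bdev_raid_get_bdevs online   # e.g. list online raid bdevs, as the test does later
# ... run the test against $rpc_py, then: kill "$raid_pid"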
00:34:01.812 16:11:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:01.812 16:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:01.812 [2024-07-22 16:11:06.030820] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:34:01.812 [2024-07-22 16:11:06.030980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:02.071 [2024-07-22 16:11:06.198657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.329 [2024-07-22 16:11:06.463954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.588 [2024-07-22 16:11:06.677982] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:02.847 16:11:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:02.847 16:11:07 -- common/autotest_common.sh@852 -- # return 0 00:34:02.847 16:11:07 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:34:02.847 16:11:07 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:34:02.847 16:11:07 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:34:02.847 16:11:07 -- bdev/bdev_raid.sh@70 -- # cat 00:34:02.847 16:11:07 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:34:03.442 [2024-07-22 16:11:07.440504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:34:03.442 [2024-07-22 16:11:07.443067] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:34:03.442 [2024-07-22 16:11:07.443168] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:34:03.442 [2024-07-22 16:11:07.443191] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:03.442 [2024-07-22 16:11:07.443352] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:34:03.442 [2024-07-22 16:11:07.443783] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:34:03.442 [2024-07-22 16:11:07.443801] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:34:03.442 [2024-07-22 16:11:07.444011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:03.442 Base_1 00:34:03.442 Base_2 00:34:03.442 16:11:07 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:34:03.442 16:11:07 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:34:03.442 16:11:07 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:34:03.442 16:11:07 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:34:03.442 16:11:07 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:34:03.442 16:11:07 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@12 -- # local i 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:03.442 16:11:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:34:03.713 [2024-07-22 16:11:07.964749] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:34:03.713 /dev/nbd0 00:34:03.971 16:11:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:03.971 16:11:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:03.971 16:11:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:34:03.971 16:11:08 -- common/autotest_common.sh@857 -- # local i 00:34:03.971 16:11:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:34:03.971 16:11:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:34:03.971 16:11:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:34:03.971 16:11:08 -- common/autotest_common.sh@861 -- # break 00:34:03.971 16:11:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:34:03.971 16:11:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:34:03.971 16:11:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:03.971 1+0 records in 00:34:03.971 1+0 records out 00:34:03.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051331 s, 8.0 MB/s 00:34:03.971 16:11:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:03.971 16:11:08 -- common/autotest_common.sh@874 -- # size=4096 00:34:03.971 16:11:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:03.971 16:11:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:34:03.971 16:11:08 -- common/autotest_common.sh@877 -- # return 0 00:34:03.971 16:11:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:03.971 16:11:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:03.971 16:11:08 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:34:03.971 16:11:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:03.971 16:11:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:04.230 { 00:34:04.230 "nbd_device": "/dev/nbd0", 00:34:04.230 "bdev_name": "raid" 00:34:04.230 } 00:34:04.230 ]' 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:04.230 { 00:34:04.230 "nbd_device": "/dev/nbd0", 00:34:04.230 "bdev_name": "raid" 00:34:04.230 } 00:34:04.230 ]' 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@65 -- # count=1 00:34:04.230 16:11:08 -- bdev/nbd_common.sh@66 -- # echo 1 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@98 -- # count=1 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:04.230 
16:11:08 -- bdev/bdev_raid.sh@20 -- # local blksize 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:34:04.230 4096+0 records in 00:34:04.230 4096+0 records out 00:34:04.230 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0247346 s, 84.8 MB/s 00:34:04.230 16:11:08 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:34:04.488 4096+0 records in 00:34:04.489 4096+0 records out 00:34:04.489 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.348254 s, 6.0 MB/s 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:34:04.489 128+0 records in 00:34:04.489 128+0 records out 00:34:04.489 65536 bytes (66 kB, 64 KiB) copied, 0.000349711 s, 187 MB/s 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:34:04.489 2035+0 records in 00:34:04.489 2035+0 records out 00:34:04.489 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00598615 s, 174 MB/s 00:34:04.489 16:11:08 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:34:04.747 
456+0 records in 00:34:04.747 456+0 records out 00:34:04.747 233472 bytes (233 kB, 228 KiB) copied, 0.00201473 s, 116 MB/s 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@53 -- # return 0 00:34:04.747 16:11:08 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:34:04.747 16:11:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:04.747 16:11:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:04.747 16:11:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:04.747 16:11:08 -- bdev/nbd_common.sh@51 -- # local i 00:34:04.747 16:11:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:04.747 16:11:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:05.004 [2024-07-22 16:11:09.074398] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@41 -- # break 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@45 -- # return 0 00:34:05.004 16:11:09 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:05.004 16:11:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@65 -- # true 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@65 -- # count=0 00:34:05.262 16:11:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:34:05.262 16:11:09 -- bdev/bdev_raid.sh@106 -- # count=0 00:34:05.262 16:11:09 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:34:05.262 16:11:09 -- bdev/bdev_raid.sh@111 -- # killprocess 69405 00:34:05.262 16:11:09 -- common/autotest_common.sh@926 -- # '[' -z 69405 ']' 00:34:05.262 16:11:09 -- common/autotest_common.sh@930 -- # kill -0 69405 00:34:05.262 16:11:09 -- common/autotest_common.sh@931 -- # uname 00:34:05.262 16:11:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:05.262 16:11:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69405 00:34:05.262 killing process with pid 69405 00:34:05.262 16:11:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:05.262 16:11:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = 
sudo ']' 00:34:05.262 16:11:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69405' 00:34:05.262 16:11:09 -- common/autotest_common.sh@945 -- # kill 69405 00:34:05.262 16:11:09 -- common/autotest_common.sh@950 -- # wait 69405 00:34:05.262 [2024-07-22 16:11:09.416240] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:05.262 [2024-07-22 16:11:09.416381] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:05.262 [2024-07-22 16:11:09.416456] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:05.262 [2024-07-22 16:11:09.416476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:34:05.520 [2024-07-22 16:11:09.603515] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:06.893 ************************************ 00:34:06.893 END TEST raid_function_test_raid0 00:34:06.893 ************************************ 00:34:06.893 16:11:10 -- bdev/bdev_raid.sh@113 -- # return 0 00:34:06.893 00:34:06.893 real 0m5.003s 00:34:06.893 user 0m6.266s 00:34:06.893 sys 0m1.139s 00:34:06.893 16:11:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:06.893 16:11:10 -- common/autotest_common.sh@10 -- # set +x 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:34:06.893 16:11:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:34:06.893 16:11:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:06.893 16:11:11 -- common/autotest_common.sh@10 -- # set +x 00:34:06.893 ************************************ 00:34:06.893 START TEST raid_function_test_concat 00:34:06.893 ************************************ 00:34:06.893 16:11:11 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:34:06.893 Process raid pid: 69562 00:34:06.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@86 -- # raid_pid=69562 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 69562' 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@88 -- # waitforlisten 69562 /var/tmp/spdk-raid.sock 00:34:06.893 16:11:11 -- common/autotest_common.sh@819 -- # '[' -z 69562 ']' 00:34:06.893 16:11:11 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:06.893 16:11:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:06.893 16:11:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:06.893 16:11:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:06.893 16:11:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:06.893 16:11:11 -- common/autotest_common.sh@10 -- # set +x 00:34:06.893 [2024-07-22 16:11:11.090166] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
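The raid0 pass that just finished (and the concat pass starting here) verify unmap behaviour over the raid bdev exported as /dev/nbd0. The dd/blkdiscard/cmp sequence in the trace can be summarized as the sketch below; the offsets, counts and tool invocations are exactly the ones shown above, only the loop structure is reconstructed.

nbd=/dev/nbd0
blksize=512
dd if=/dev/urandom of=/raidrandtest bs=$blksize count=4096          # reference pattern
dd if=/raidrandtest of=$nbd bs=$blksize count=4096 oflag=direct     # write it to the raid bdev
blockdev --flushbufs $nbd
cmp -b -n $((4096 * blksize)) /raidrandtest $nbd                    # initial data check

offs=(0 1028 321); nums=(128 2035 456)
for i in 0 1 2; do
  off=$(( offs[i] * blksize )); len=$(( nums[i] * blksize ))
  # Zero the same range in the reference file, discard it on the device; the
  # two must still compare equal (unmapped blocks read back as zeroes).
  dd if=/dev/zero of=/raidrandtest bs=$blksize seek=${offs[i]} count=${nums[i]} conv=notrunc
  blkdiscard -o $off -l $len $nbd
  blockdev --flushbufs $nbd
  cmp -b -n $((4096 * blksize)) /raidrandtest $nbd
done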
00:34:06.893 [2024-07-22 16:11:11.090357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.151 [2024-07-22 16:11:11.276797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.409 [2024-07-22 16:11:11.599371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.691 [2024-07-22 16:11:11.830727] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:07.988 16:11:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:07.988 16:11:12 -- common/autotest_common.sh@852 -- # return 0 00:34:07.988 16:11:12 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:34:07.988 16:11:12 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:34:07.988 16:11:12 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:34:07.988 16:11:12 -- bdev/bdev_raid.sh@70 -- # cat 00:34:07.988 16:11:12 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:34:08.246 [2024-07-22 16:11:12.455180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:34:08.246 [2024-07-22 16:11:12.457706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:34:08.246 [2024-07-22 16:11:12.457795] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:34:08.246 [2024-07-22 16:11:12.457818] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:08.246 [2024-07-22 16:11:12.458001] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:34:08.246 [2024-07-22 16:11:12.458499] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:34:08.246 [2024-07-22 16:11:12.458518] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:34:08.246 [2024-07-22 16:11:12.458725] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:08.246 Base_1 00:34:08.246 Base_2 00:34:08.246 16:11:12 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:34:08.246 16:11:12 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:34:08.246 16:11:12 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:34:08.504 16:11:12 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:34:08.504 16:11:12 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:34:08.504 16:11:12 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@12 -- # local i 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:08.504 16:11:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:34:08.762 [2024-07-22 
16:11:12.979415] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:34:08.762 /dev/nbd0 00:34:08.762 16:11:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:08.762 16:11:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:08.762 16:11:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:34:08.762 16:11:13 -- common/autotest_common.sh@857 -- # local i 00:34:08.762 16:11:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:34:08.762 16:11:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:34:08.762 16:11:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:34:08.762 16:11:13 -- common/autotest_common.sh@861 -- # break 00:34:08.762 16:11:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:34:08.762 16:11:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:34:08.762 16:11:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:08.762 1+0 records in 00:34:08.762 1+0 records out 00:34:08.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027434 s, 14.9 MB/s 00:34:08.762 16:11:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:08.762 16:11:13 -- common/autotest_common.sh@874 -- # size=4096 00:34:08.762 16:11:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:08.762 16:11:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:34:08.762 16:11:13 -- common/autotest_common.sh@877 -- # return 0 00:34:08.762 16:11:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:08.762 16:11:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:08.762 16:11:13 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:34:08.762 16:11:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:08.762 16:11:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:09.325 { 00:34:09.325 "nbd_device": "/dev/nbd0", 00:34:09.325 "bdev_name": "raid" 00:34:09.325 } 00:34:09.325 ]' 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:09.325 { 00:34:09.325 "nbd_device": "/dev/nbd0", 00:34:09.325 "bdev_name": "raid" 00:34:09.325 } 00:34:09.325 ]' 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@65 -- # count=1 00:34:09.325 16:11:13 -- bdev/nbd_common.sh@66 -- # echo 1 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@98 -- # count=1 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@20 -- # local blksize 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@21 -- # cut -d ' 
' -f 5 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:34:09.325 4096+0 records in 00:34:09.325 4096+0 records out 00:34:09.325 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0206886 s, 101 MB/s 00:34:09.325 16:11:13 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:34:09.583 4096+0 records in 00:34:09.583 4096+0 records out 00:34:09.583 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.38677 s, 5.4 MB/s 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:34:09.583 128+0 records in 00:34:09.583 128+0 records out 00:34:09.583 65536 bytes (66 kB, 64 KiB) copied, 0.000357497 s, 183 MB/s 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:34:09.583 2035+0 records in 00:34:09.583 2035+0 records out 00:34:09.583 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00589538 s, 177 MB/s 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:34:09.583 456+0 records in 00:34:09.583 456+0 records out 00:34:09.583 233472 bytes (233 kB, 228 KiB) copied, 0.00167858 s, 139 MB/s 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:34:09.583 16:11:13 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@53 -- # return 0 00:34:09.583 16:11:13 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:34:09.583 16:11:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:09.583 16:11:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:09.839 16:11:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:09.839 16:11:13 -- bdev/nbd_common.sh@51 -- # local i 00:34:09.839 16:11:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:09.839 16:11:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:10.096 [2024-07-22 16:11:14.159029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@41 -- # break 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@45 -- # return 0 00:34:10.096 16:11:14 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:10.096 16:11:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@65 -- # true 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@65 -- # count=0 00:34:10.354 16:11:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:34:10.354 16:11:14 -- bdev/bdev_raid.sh@106 -- # count=0 00:34:10.354 16:11:14 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:34:10.354 16:11:14 -- bdev/bdev_raid.sh@111 -- # killprocess 69562 00:34:10.354 16:11:14 -- common/autotest_common.sh@926 -- # '[' -z 69562 ']' 00:34:10.354 16:11:14 -- common/autotest_common.sh@930 -- # kill -0 69562 00:34:10.354 16:11:14 -- common/autotest_common.sh@931 -- # uname 00:34:10.354 16:11:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:10.354 16:11:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69562 00:34:10.354 16:11:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:10.354 16:11:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:10.354 killing process with pid 69562 00:34:10.354 16:11:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69562' 00:34:10.354 16:11:14 -- common/autotest_common.sh@945 -- # kill 69562 00:34:10.354 16:11:14 -- common/autotest_common.sh@950 -- # wait 
69562 00:34:10.354 [2024-07-22 16:11:14.498675] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:10.354 [2024-07-22 16:11:14.498792] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:10.354 [2024-07-22 16:11:14.498860] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:10.354 [2024-07-22 16:11:14.498886] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:34:10.618 [2024-07-22 16:11:14.697827] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:11.995 ************************************ 00:34:11.995 END TEST raid_function_test_concat 00:34:11.995 ************************************ 00:34:11.995 16:11:15 -- bdev/bdev_raid.sh@113 -- # return 0 00:34:11.995 00:34:11.995 real 0m4.978s 00:34:11.995 user 0m6.224s 00:34:11.995 sys 0m1.142s 00:34:11.995 16:11:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:11.995 16:11:15 -- common/autotest_common.sh@10 -- # set +x 00:34:11.995 16:11:16 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:34:11.995 16:11:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:11.995 16:11:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:11.995 16:11:16 -- common/autotest_common.sh@10 -- # set +x 00:34:11.996 ************************************ 00:34:11.996 START TEST raid0_resize_test 00:34:11.996 ************************************ 00:34:11.996 16:11:16 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:34:11.996 Process raid pid: 69712 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@301 -- # raid_pid=69712 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 69712' 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@303 -- # waitforlisten 69712 /var/tmp/spdk-raid.sock 00:34:11.996 16:11:16 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:11.996 16:11:16 -- common/autotest_common.sh@819 -- # '[' -z 69712 ']' 00:34:11.996 16:11:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:11.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:11.996 16:11:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:11.996 16:11:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:11.996 16:11:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:11.996 16:11:16 -- common/autotest_common.sh@10 -- # set +x 00:34:11.996 [2024-07-22 16:11:16.121071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:34:11.996 [2024-07-22 16:11:16.121231] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.254 [2024-07-22 16:11:16.293004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.512 [2024-07-22 16:11:16.603488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.770 [2024-07-22 16:11:16.820391] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:13.029 16:11:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:13.029 16:11:17 -- common/autotest_common.sh@852 -- # return 0 00:34:13.029 16:11:17 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:34:13.029 Base_1 00:34:13.288 16:11:17 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:34:13.288 Base_2 00:34:13.288 16:11:17 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:34:13.547 [2024-07-22 16:11:17.788158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:34:13.547 [2024-07-22 16:11:17.790553] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:34:13.547 [2024-07-22 16:11:17.790632] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:34:13.547 [2024-07-22 16:11:17.790653] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:13.547 [2024-07-22 16:11:17.790818] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005450 00:34:13.547 [2024-07-22 16:11:17.791236] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:34:13.547 [2024-07-22 16:11:17.791254] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000006f80 00:34:13.547 [2024-07-22 16:11:17.791455] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.547 16:11:17 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:34:13.805 [2024-07-22 16:11:18.060207] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:34:13.805 [2024-07-22 16:11:18.060477] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:34:13.805 true 00:34:14.085 16:11:18 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:34:14.085 16:11:18 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:34:14.085 [2024-07-22 16:11:18.324414] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.347 16:11:18 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:34:14.347 16:11:18 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:34:14.347 16:11:18 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:34:14.347 16:11:18 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:34:14.347 [2024-07-22 16:11:18.548290] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:34:14.347 [2024-07-22 16:11:18.548364] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:34:14.347 [2024-07-22 16:11:18.548426] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:34:14.347 [2024-07-22 16:11:18.548461] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:14.347 true 00:34:14.347 16:11:18 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:34:14.347 16:11:18 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:34:14.605 [2024-07-22 16:11:18.816511] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.605 16:11:18 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:34:14.605 16:11:18 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:34:14.605 16:11:18 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:34:14.605 16:11:18 -- bdev/bdev_raid.sh@332 -- # killprocess 69712 00:34:14.605 16:11:18 -- common/autotest_common.sh@926 -- # '[' -z 69712 ']' 00:34:14.605 16:11:18 -- common/autotest_common.sh@930 -- # kill -0 69712 00:34:14.605 16:11:18 -- common/autotest_common.sh@931 -- # uname 00:34:14.605 16:11:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:14.605 16:11:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69712 00:34:14.605 killing process with pid 69712 00:34:14.605 16:11:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:14.605 16:11:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:14.605 16:11:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69712' 00:34:14.605 16:11:18 -- common/autotest_common.sh@945 -- # kill 69712 00:34:14.605 [2024-07-22 16:11:18.874632] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:14.605 16:11:18 -- common/autotest_common.sh@950 -- # wait 69712 00:34:14.605 [2024-07-22 16:11:18.874741] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:14.605 [2024-07-22 16:11:18.874802] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:14.605 [2024-07-22 16:11:18.874826] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Raid, state offline 00:34:14.863 [2024-07-22 16:11:18.875538] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@334 -- # return 0 00:34:16.237 00:34:16.237 real 0m4.130s 00:34:16.237 user 0m5.695s 00:34:16.237 sys 0m0.623s 00:34:16.237 16:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:16.237 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:34:16.237 ************************************ 00:34:16.237 END TEST raid0_resize_test 00:34:16.237 ************************************ 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:34:16.237 16:11:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:34:16.237 16:11:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:16.237 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:34:16.237 ************************************ 00:34:16.237 START TEST raid_state_function_test 
00:34:16.237 ************************************ 00:34:16.237 16:11:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:16.237 Process raid pid: 69796 00:34:16.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=69796 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69796' 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69796 /var/tmp/spdk-raid.sock 00:34:16.237 16:11:20 -- common/autotest_common.sh@819 -- # '[' -z 69796 ']' 00:34:16.237 16:11:20 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:16.237 16:11:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:16.237 16:11:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:16.237 16:11:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:16.237 16:11:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:16.237 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:34:16.237 [2024-07-22 16:11:20.319669] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:34:16.237 [2024-07-22 16:11:20.320093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.237 [2024-07-22 16:11:20.497907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.803 [2024-07-22 16:11:20.783937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.803 [2024-07-22 16:11:21.001945] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:17.061 16:11:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:17.061 16:11:21 -- common/autotest_common.sh@852 -- # return 0 00:34:17.061 16:11:21 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:17.319 [2024-07-22 16:11:21.498899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:17.319 [2024-07-22 16:11:21.499180] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:17.319 [2024-07-22 16:11:21.499352] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:17.319 [2024-07-22 16:11:21.499486] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.319 16:11:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.577 16:11:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:17.577 "name": "Existed_Raid", 00:34:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.577 "strip_size_kb": 64, 00:34:17.577 "state": "configuring", 00:34:17.577 "raid_level": "raid0", 00:34:17.577 "superblock": false, 00:34:17.577 "num_base_bdevs": 2, 00:34:17.577 "num_base_bdevs_discovered": 0, 00:34:17.577 "num_base_bdevs_operational": 2, 00:34:17.577 "base_bdevs_list": [ 00:34:17.577 { 00:34:17.577 "name": "BaseBdev1", 00:34:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.577 "is_configured": false, 00:34:17.577 "data_offset": 0, 00:34:17.577 "data_size": 0 00:34:17.577 }, 00:34:17.577 { 00:34:17.577 "name": "BaseBdev2", 00:34:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.577 "is_configured": false, 00:34:17.577 "data_offset": 0, 00:34:17.577 "data_size": 0 00:34:17.577 } 00:34:17.577 ] 00:34:17.577 }' 00:34:17.577 16:11:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:17.577 16:11:21 -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.835 16:11:22 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:18.093 [2024-07-22 16:11:22.331020] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:18.093 [2024-07-22 16:11:22.331369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:34:18.093 16:11:22 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:18.350 [2024-07-22 16:11:22.559102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:18.350 [2024-07-22 16:11:22.559212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:18.350 [2024-07-22 16:11:22.559238] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:18.350 [2024-07-22 16:11:22.559257] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:18.350 16:11:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:18.608 [2024-07-22 16:11:22.874751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:18.608 BaseBdev1 00:34:18.866 16:11:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:18.866 16:11:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:34:18.866 16:11:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:18.866 16:11:22 -- common/autotest_common.sh@889 -- # local i 00:34:18.866 16:11:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:18.866 16:11:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:18.866 16:11:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:19.123 16:11:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:19.123 [ 00:34:19.123 { 00:34:19.123 "name": "BaseBdev1", 00:34:19.123 "aliases": [ 00:34:19.123 "e002656d-0c9f-4cc2-979c-e611dcf9aea1" 00:34:19.123 ], 00:34:19.123 "product_name": "Malloc disk", 00:34:19.123 "block_size": 512, 00:34:19.123 "num_blocks": 65536, 00:34:19.123 "uuid": "e002656d-0c9f-4cc2-979c-e611dcf9aea1", 00:34:19.123 "assigned_rate_limits": { 00:34:19.123 "rw_ios_per_sec": 0, 00:34:19.123 "rw_mbytes_per_sec": 0, 00:34:19.123 "r_mbytes_per_sec": 0, 00:34:19.123 "w_mbytes_per_sec": 0 00:34:19.123 }, 00:34:19.123 "claimed": true, 00:34:19.123 "claim_type": "exclusive_write", 00:34:19.123 "zoned": false, 00:34:19.123 "supported_io_types": { 00:34:19.123 "read": true, 00:34:19.123 "write": true, 00:34:19.124 "unmap": true, 00:34:19.124 "write_zeroes": true, 00:34:19.124 "flush": true, 00:34:19.124 "reset": true, 00:34:19.124 "compare": false, 00:34:19.124 "compare_and_write": false, 00:34:19.124 "abort": true, 00:34:19.124 "nvme_admin": false, 00:34:19.124 "nvme_io": false 00:34:19.124 }, 00:34:19.124 "memory_domains": [ 00:34:19.124 { 00:34:19.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:19.124 "dma_device_type": 2 00:34:19.124 } 00:34:19.124 ], 00:34:19.124 "driver_specific": {} 00:34:19.124 } 00:34:19.124 ] 00:34:19.381 16:11:23 
-- common/autotest_common.sh@895 -- # return 0 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.381 16:11:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:19.639 16:11:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:19.639 "name": "Existed_Raid", 00:34:19.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.639 "strip_size_kb": 64, 00:34:19.639 "state": "configuring", 00:34:19.639 "raid_level": "raid0", 00:34:19.639 "superblock": false, 00:34:19.639 "num_base_bdevs": 2, 00:34:19.639 "num_base_bdevs_discovered": 1, 00:34:19.639 "num_base_bdevs_operational": 2, 00:34:19.639 "base_bdevs_list": [ 00:34:19.639 { 00:34:19.639 "name": "BaseBdev1", 00:34:19.639 "uuid": "e002656d-0c9f-4cc2-979c-e611dcf9aea1", 00:34:19.639 "is_configured": true, 00:34:19.639 "data_offset": 0, 00:34:19.639 "data_size": 65536 00:34:19.639 }, 00:34:19.639 { 00:34:19.639 "name": "BaseBdev2", 00:34:19.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.639 "is_configured": false, 00:34:19.639 "data_offset": 0, 00:34:19.639 "data_size": 0 00:34:19.639 } 00:34:19.639 ] 00:34:19.639 }' 00:34:19.639 16:11:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:19.639 16:11:23 -- common/autotest_common.sh@10 -- # set +x 00:34:19.897 16:11:24 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:20.154 [2024-07-22 16:11:24.319259] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:20.154 [2024-07-22 16:11:24.319604] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:34:20.154 16:11:24 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:34:20.155 16:11:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:20.412 [2024-07-22 16:11:24.635398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:20.412 [2024-07-22 16:11:24.637934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:20.412 [2024-07-22 16:11:24.638016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:20.412 16:11:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:34:20.412 16:11:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:20.412 16:11:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:20.413 16:11:24 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.413 16:11:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:20.671 16:11:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:20.671 "name": "Existed_Raid", 00:34:20.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.671 "strip_size_kb": 64, 00:34:20.671 "state": "configuring", 00:34:20.671 "raid_level": "raid0", 00:34:20.671 "superblock": false, 00:34:20.671 "num_base_bdevs": 2, 00:34:20.671 "num_base_bdevs_discovered": 1, 00:34:20.671 "num_base_bdevs_operational": 2, 00:34:20.671 "base_bdevs_list": [ 00:34:20.671 { 00:34:20.671 "name": "BaseBdev1", 00:34:20.671 "uuid": "e002656d-0c9f-4cc2-979c-e611dcf9aea1", 00:34:20.671 "is_configured": true, 00:34:20.671 "data_offset": 0, 00:34:20.671 "data_size": 65536 00:34:20.671 }, 00:34:20.671 { 00:34:20.671 "name": "BaseBdev2", 00:34:20.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.671 "is_configured": false, 00:34:20.671 "data_offset": 0, 00:34:20.671 "data_size": 0 00:34:20.671 } 00:34:20.671 ] 00:34:20.671 }' 00:34:20.671 16:11:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:20.671 16:11:24 -- common/autotest_common.sh@10 -- # set +x 00:34:21.264 16:11:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:21.523 [2024-07-22 16:11:25.583017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:21.523 [2024-07-22 16:11:25.583095] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:34:21.523 [2024-07-22 16:11:25.583109] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:21.523 [2024-07-22 16:11:25.583259] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:34:21.523 [2024-07-22 16:11:25.583665] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:34:21.523 [2024-07-22 16:11:25.583695] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:34:21.523 [2024-07-22 16:11:25.584026] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:21.523 BaseBdev2 00:34:21.523 16:11:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:34:21.523 16:11:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:34:21.523 16:11:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:21.523 16:11:25 -- common/autotest_common.sh@889 -- # local i 00:34:21.523 16:11:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:21.523 16:11:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:21.523 
16:11:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:21.781 16:11:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:22.039 [ 00:34:22.039 { 00:34:22.039 "name": "BaseBdev2", 00:34:22.039 "aliases": [ 00:34:22.039 "e2f8621d-8c89-43e4-b806-6120b517b006" 00:34:22.039 ], 00:34:22.039 "product_name": "Malloc disk", 00:34:22.039 "block_size": 512, 00:34:22.039 "num_blocks": 65536, 00:34:22.039 "uuid": "e2f8621d-8c89-43e4-b806-6120b517b006", 00:34:22.039 "assigned_rate_limits": { 00:34:22.039 "rw_ios_per_sec": 0, 00:34:22.039 "rw_mbytes_per_sec": 0, 00:34:22.039 "r_mbytes_per_sec": 0, 00:34:22.039 "w_mbytes_per_sec": 0 00:34:22.039 }, 00:34:22.039 "claimed": true, 00:34:22.039 "claim_type": "exclusive_write", 00:34:22.039 "zoned": false, 00:34:22.039 "supported_io_types": { 00:34:22.039 "read": true, 00:34:22.039 "write": true, 00:34:22.039 "unmap": true, 00:34:22.039 "write_zeroes": true, 00:34:22.039 "flush": true, 00:34:22.039 "reset": true, 00:34:22.039 "compare": false, 00:34:22.039 "compare_and_write": false, 00:34:22.039 "abort": true, 00:34:22.039 "nvme_admin": false, 00:34:22.039 "nvme_io": false 00:34:22.039 }, 00:34:22.039 "memory_domains": [ 00:34:22.039 { 00:34:22.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:22.039 "dma_device_type": 2 00:34:22.039 } 00:34:22.039 ], 00:34:22.039 "driver_specific": {} 00:34:22.039 } 00:34:22.039 ] 00:34:22.039 16:11:26 -- common/autotest_common.sh@895 -- # return 0 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:22.039 16:11:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.297 16:11:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:22.297 "name": "Existed_Raid", 00:34:22.297 "uuid": "657d165a-69b4-46b0-b622-13b3879def6c", 00:34:22.297 "strip_size_kb": 64, 00:34:22.297 "state": "online", 00:34:22.297 "raid_level": "raid0", 00:34:22.297 "superblock": false, 00:34:22.297 "num_base_bdevs": 2, 00:34:22.297 "num_base_bdevs_discovered": 2, 00:34:22.297 "num_base_bdevs_operational": 2, 00:34:22.297 "base_bdevs_list": [ 00:34:22.297 { 00:34:22.297 "name": "BaseBdev1", 00:34:22.297 "uuid": "e002656d-0c9f-4cc2-979c-e611dcf9aea1", 00:34:22.297 "is_configured": true, 00:34:22.297 "data_offset": 0, 00:34:22.297 "data_size": 65536 00:34:22.297 }, 00:34:22.297 { 00:34:22.297 "name": "BaseBdev2", 
00:34:22.297 "uuid": "e2f8621d-8c89-43e4-b806-6120b517b006", 00:34:22.297 "is_configured": true, 00:34:22.297 "data_offset": 0, 00:34:22.297 "data_size": 65536 00:34:22.297 } 00:34:22.297 ] 00:34:22.297 }' 00:34:22.297 16:11:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:22.297 16:11:26 -- common/autotest_common.sh@10 -- # set +x 00:34:22.555 16:11:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:22.812 [2024-07-22 16:11:27.011563] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:22.812 [2024-07-22 16:11:27.011850] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:22.813 [2024-07-22 16:11:27.012049] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.070 16:11:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:23.327 16:11:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:23.327 "name": "Existed_Raid", 00:34:23.327 "uuid": "657d165a-69b4-46b0-b622-13b3879def6c", 00:34:23.327 "strip_size_kb": 64, 00:34:23.327 "state": "offline", 00:34:23.327 "raid_level": "raid0", 00:34:23.327 "superblock": false, 00:34:23.327 "num_base_bdevs": 2, 00:34:23.327 "num_base_bdevs_discovered": 1, 00:34:23.327 "num_base_bdevs_operational": 1, 00:34:23.327 "base_bdevs_list": [ 00:34:23.327 { 00:34:23.327 "name": null, 00:34:23.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.327 "is_configured": false, 00:34:23.327 "data_offset": 0, 00:34:23.327 "data_size": 65536 00:34:23.327 }, 00:34:23.327 { 00:34:23.327 "name": "BaseBdev2", 00:34:23.327 "uuid": "e2f8621d-8c89-43e4-b806-6120b517b006", 00:34:23.327 "is_configured": true, 00:34:23.328 "data_offset": 0, 00:34:23.328 "data_size": 65536 00:34:23.328 } 00:34:23.328 ] 00:34:23.328 }' 00:34:23.328 16:11:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:23.328 16:11:27 -- common/autotest_common.sh@10 -- # set +x 00:34:23.585 16:11:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:34:23.585 16:11:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:23.585 16:11:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.585 16:11:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:24.151 16:11:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:24.151 16:11:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:24.151 16:11:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:24.459 [2024-07-22 16:11:28.449774] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:24.459 [2024-07-22 16:11:28.449890] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:34:24.459 16:11:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:24.459 16:11:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:24.459 16:11:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.460 16:11:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:34:24.716 16:11:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:34:24.717 16:11:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:34:24.717 16:11:28 -- bdev/bdev_raid.sh@287 -- # killprocess 69796 00:34:24.717 16:11:28 -- common/autotest_common.sh@926 -- # '[' -z 69796 ']' 00:34:24.717 16:11:28 -- common/autotest_common.sh@930 -- # kill -0 69796 00:34:24.717 16:11:28 -- common/autotest_common.sh@931 -- # uname 00:34:24.717 16:11:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:24.717 16:11:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69796 00:34:24.717 killing process with pid 69796 00:34:24.717 16:11:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:24.717 16:11:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:24.717 16:11:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69796' 00:34:24.717 16:11:28 -- common/autotest_common.sh@945 -- # kill 69796 00:34:24.717 16:11:28 -- common/autotest_common.sh@950 -- # wait 69796 00:34:24.717 [2024-07-22 16:11:28.912868] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:24.717 [2024-07-22 16:11:28.913073] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:26.087 ************************************ 00:34:26.087 END TEST raid_state_function_test 00:34:26.087 ************************************ 00:34:26.087 16:11:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:34:26.087 00:34:26.087 real 0m10.107s 00:34:26.087 user 0m16.277s 00:34:26.087 sys 0m1.539s 00:34:26.087 16:11:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:26.087 16:11:30 -- common/autotest_common.sh@10 -- # set +x 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:34:26.345 16:11:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:34:26.345 16:11:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:26.345 16:11:30 -- common/autotest_common.sh@10 -- # set +x 00:34:26.345 ************************************ 00:34:26.345 START TEST raid_state_function_test_sb 00:34:26.345 ************************************ 00:34:26.345 16:11:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:34:26.345 16:11:30 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=70095 00:34:26.345 Process raid pid: 70095 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70095' 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:26.345 16:11:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70095 /var/tmp/spdk-raid.sock 00:34:26.345 16:11:30 -- common/autotest_common.sh@819 -- # '[' -z 70095 ']' 00:34:26.345 16:11:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:26.345 16:11:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:26.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:26.345 16:11:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:26.345 16:11:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:26.345 16:11:30 -- common/autotest_common.sh@10 -- # set +x 00:34:26.345 [2024-07-22 16:11:30.471604] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:34:26.345 [2024-07-22 16:11:30.471761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.603 [2024-07-22 16:11:30.640528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.860 [2024-07-22 16:11:30.935765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.118 [2024-07-22 16:11:31.153432] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:27.118 16:11:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:27.118 16:11:31 -- common/autotest_common.sh@852 -- # return 0 00:34:27.118 16:11:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:27.376 [2024-07-22 16:11:31.601001] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:27.376 [2024-07-22 16:11:31.601092] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:27.376 [2024-07-22 16:11:31.601112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:27.376 [2024-07-22 16:11:31.601130] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.376 16:11:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.942 16:11:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:27.942 "name": "Existed_Raid", 00:34:27.942 "uuid": "840c3f69-075b-4d88-a39f-e041b37caede", 00:34:27.942 "strip_size_kb": 64, 00:34:27.942 "state": "configuring", 00:34:27.942 "raid_level": "raid0", 00:34:27.942 "superblock": true, 00:34:27.942 "num_base_bdevs": 2, 00:34:27.942 "num_base_bdevs_discovered": 0, 00:34:27.942 "num_base_bdevs_operational": 2, 00:34:27.942 "base_bdevs_list": [ 00:34:27.942 { 00:34:27.942 "name": "BaseBdev1", 00:34:27.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.942 "is_configured": false, 00:34:27.942 "data_offset": 0, 00:34:27.942 "data_size": 0 00:34:27.942 }, 00:34:27.942 { 00:34:27.942 "name": "BaseBdev2", 00:34:27.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:27.942 "is_configured": false, 00:34:27.942 "data_offset": 0, 00:34:27.942 "data_size": 0 00:34:27.942 } 00:34:27.942 ] 00:34:27.942 }' 00:34:27.942 16:11:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:27.942 16:11:31 -- 
common/autotest_common.sh@10 -- # set +x 00:34:28.206 16:11:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:28.478 [2024-07-22 16:11:32.528914] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:28.478 [2024-07-22 16:11:32.529006] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:34:28.478 16:11:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:28.736 [2024-07-22 16:11:32.757099] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:28.736 [2024-07-22 16:11:32.757220] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:28.736 [2024-07-22 16:11:32.757246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:28.736 [2024-07-22 16:11:32.757265] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:28.736 16:11:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:28.993 [2024-07-22 16:11:33.108740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:28.993 BaseBdev1 00:34:28.993 16:11:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:28.993 16:11:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:34:28.993 16:11:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:28.993 16:11:33 -- common/autotest_common.sh@889 -- # local i 00:34:28.993 16:11:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:28.993 16:11:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:28.993 16:11:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:29.251 16:11:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:29.509 [ 00:34:29.509 { 00:34:29.509 "name": "BaseBdev1", 00:34:29.509 "aliases": [ 00:34:29.509 "5417e73d-e53e-4379-8c65-83eb092d0bbb" 00:34:29.509 ], 00:34:29.509 "product_name": "Malloc disk", 00:34:29.509 "block_size": 512, 00:34:29.509 "num_blocks": 65536, 00:34:29.509 "uuid": "5417e73d-e53e-4379-8c65-83eb092d0bbb", 00:34:29.509 "assigned_rate_limits": { 00:34:29.509 "rw_ios_per_sec": 0, 00:34:29.509 "rw_mbytes_per_sec": 0, 00:34:29.509 "r_mbytes_per_sec": 0, 00:34:29.509 "w_mbytes_per_sec": 0 00:34:29.509 }, 00:34:29.509 "claimed": true, 00:34:29.509 "claim_type": "exclusive_write", 00:34:29.509 "zoned": false, 00:34:29.509 "supported_io_types": { 00:34:29.509 "read": true, 00:34:29.509 "write": true, 00:34:29.509 "unmap": true, 00:34:29.509 "write_zeroes": true, 00:34:29.509 "flush": true, 00:34:29.509 "reset": true, 00:34:29.509 "compare": false, 00:34:29.509 "compare_and_write": false, 00:34:29.509 "abort": true, 00:34:29.509 "nvme_admin": false, 00:34:29.509 "nvme_io": false 00:34:29.509 }, 00:34:29.509 "memory_domains": [ 00:34:29.509 { 00:34:29.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:29.509 "dma_device_type": 2 00:34:29.509 } 00:34:29.509 ], 00:34:29.509 "driver_specific": {} 00:34:29.509 } 00:34:29.509 ] 00:34:29.509 
16:11:33 -- common/autotest_common.sh@895 -- # return 0 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:29.509 16:11:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.768 16:11:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:29.768 "name": "Existed_Raid", 00:34:29.768 "uuid": "fe07be1b-4a1d-4b22-bfa7-b70e35ad89d2", 00:34:29.768 "strip_size_kb": 64, 00:34:29.768 "state": "configuring", 00:34:29.768 "raid_level": "raid0", 00:34:29.768 "superblock": true, 00:34:29.768 "num_base_bdevs": 2, 00:34:29.768 "num_base_bdevs_discovered": 1, 00:34:29.768 "num_base_bdevs_operational": 2, 00:34:29.768 "base_bdevs_list": [ 00:34:29.768 { 00:34:29.768 "name": "BaseBdev1", 00:34:29.768 "uuid": "5417e73d-e53e-4379-8c65-83eb092d0bbb", 00:34:29.768 "is_configured": true, 00:34:29.768 "data_offset": 2048, 00:34:29.768 "data_size": 63488 00:34:29.768 }, 00:34:29.768 { 00:34:29.768 "name": "BaseBdev2", 00:34:29.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:29.768 "is_configured": false, 00:34:29.768 "data_offset": 0, 00:34:29.768 "data_size": 0 00:34:29.768 } 00:34:29.768 ] 00:34:29.768 }' 00:34:29.768 16:11:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:29.768 16:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:30.027 16:11:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:30.286 [2024-07-22 16:11:34.425268] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:30.286 [2024-07-22 16:11:34.425355] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:34:30.286 16:11:34 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:34:30.286 16:11:34 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:30.544 16:11:34 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:30.803 BaseBdev1 00:34:30.803 16:11:35 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:34:30.803 16:11:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:34:30.803 16:11:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:30.803 16:11:35 -- common/autotest_common.sh@889 -- # local i 00:34:30.803 16:11:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:30.803 16:11:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:30.803 16:11:35 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:31.061 16:11:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:31.319 [ 00:34:31.319 { 00:34:31.319 "name": "BaseBdev1", 00:34:31.319 "aliases": [ 00:34:31.319 "22db1caa-1069-4f0d-9e66-bff85522d38d" 00:34:31.319 ], 00:34:31.319 "product_name": "Malloc disk", 00:34:31.319 "block_size": 512, 00:34:31.319 "num_blocks": 65536, 00:34:31.319 "uuid": "22db1caa-1069-4f0d-9e66-bff85522d38d", 00:34:31.319 "assigned_rate_limits": { 00:34:31.319 "rw_ios_per_sec": 0, 00:34:31.319 "rw_mbytes_per_sec": 0, 00:34:31.319 "r_mbytes_per_sec": 0, 00:34:31.320 "w_mbytes_per_sec": 0 00:34:31.320 }, 00:34:31.320 "claimed": false, 00:34:31.320 "zoned": false, 00:34:31.320 "supported_io_types": { 00:34:31.320 "read": true, 00:34:31.320 "write": true, 00:34:31.320 "unmap": true, 00:34:31.320 "write_zeroes": true, 00:34:31.320 "flush": true, 00:34:31.320 "reset": true, 00:34:31.320 "compare": false, 00:34:31.320 "compare_and_write": false, 00:34:31.320 "abort": true, 00:34:31.320 "nvme_admin": false, 00:34:31.320 "nvme_io": false 00:34:31.320 }, 00:34:31.320 "memory_domains": [ 00:34:31.320 { 00:34:31.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:31.320 "dma_device_type": 2 00:34:31.320 } 00:34:31.320 ], 00:34:31.320 "driver_specific": {} 00:34:31.320 } 00:34:31.320 ] 00:34:31.320 16:11:35 -- common/autotest_common.sh@895 -- # return 0 00:34:31.320 16:11:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:31.578 [2024-07-22 16:11:35.750140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:31.578 [2024-07-22 16:11:35.752545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:31.578 [2024-07-22 16:11:35.752603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.578 16:11:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:31.837 16:11:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:31.837 "name": "Existed_Raid", 00:34:31.837 "uuid": "6a7d2eb2-f04b-414a-b205-fb5dd7791a4d", 00:34:31.837 "strip_size_kb": 64, 00:34:31.837 "state": 
"configuring", 00:34:31.837 "raid_level": "raid0", 00:34:31.837 "superblock": true, 00:34:31.837 "num_base_bdevs": 2, 00:34:31.837 "num_base_bdevs_discovered": 1, 00:34:31.837 "num_base_bdevs_operational": 2, 00:34:31.837 "base_bdevs_list": [ 00:34:31.837 { 00:34:31.837 "name": "BaseBdev1", 00:34:31.837 "uuid": "22db1caa-1069-4f0d-9e66-bff85522d38d", 00:34:31.837 "is_configured": true, 00:34:31.837 "data_offset": 2048, 00:34:31.837 "data_size": 63488 00:34:31.837 }, 00:34:31.837 { 00:34:31.837 "name": "BaseBdev2", 00:34:31.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:31.837 "is_configured": false, 00:34:31.837 "data_offset": 0, 00:34:31.837 "data_size": 0 00:34:31.837 } 00:34:31.837 ] 00:34:31.837 }' 00:34:31.837 16:11:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:31.837 16:11:36 -- common/autotest_common.sh@10 -- # set +x 00:34:32.103 16:11:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:32.362 [2024-07-22 16:11:36.569366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:32.362 [2024-07-22 16:11:36.569632] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:34:32.362 [2024-07-22 16:11:36.569652] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:32.362 [2024-07-22 16:11:36.569779] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:34:32.362 [2024-07-22 16:11:36.570187] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:34:32.362 [2024-07-22 16:11:36.570219] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:34:32.362 [2024-07-22 16:11:36.570389] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:32.362 BaseBdev2 00:34:32.362 16:11:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:34:32.362 16:11:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:34:32.362 16:11:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:32.362 16:11:36 -- common/autotest_common.sh@889 -- # local i 00:34:32.362 16:11:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:32.362 16:11:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:32.362 16:11:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:32.623 16:11:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:32.882 [ 00:34:32.882 { 00:34:32.882 "name": "BaseBdev2", 00:34:32.882 "aliases": [ 00:34:32.882 "39332737-6d1c-4742-b434-7ce7292b7a5a" 00:34:32.882 ], 00:34:32.882 "product_name": "Malloc disk", 00:34:32.882 "block_size": 512, 00:34:32.882 "num_blocks": 65536, 00:34:32.882 "uuid": "39332737-6d1c-4742-b434-7ce7292b7a5a", 00:34:32.882 "assigned_rate_limits": { 00:34:32.882 "rw_ios_per_sec": 0, 00:34:32.882 "rw_mbytes_per_sec": 0, 00:34:32.882 "r_mbytes_per_sec": 0, 00:34:32.882 "w_mbytes_per_sec": 0 00:34:32.882 }, 00:34:32.882 "claimed": true, 00:34:32.882 "claim_type": "exclusive_write", 00:34:32.882 "zoned": false, 00:34:32.882 "supported_io_types": { 00:34:32.882 "read": true, 00:34:32.882 "write": true, 00:34:32.882 "unmap": true, 00:34:32.882 "write_zeroes": true, 00:34:32.882 "flush": true, 00:34:32.882 
"reset": true, 00:34:32.882 "compare": false, 00:34:32.882 "compare_and_write": false, 00:34:32.882 "abort": true, 00:34:32.882 "nvme_admin": false, 00:34:32.882 "nvme_io": false 00:34:32.882 }, 00:34:32.882 "memory_domains": [ 00:34:32.882 { 00:34:32.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:32.882 "dma_device_type": 2 00:34:32.882 } 00:34:32.882 ], 00:34:32.882 "driver_specific": {} 00:34:32.882 } 00:34:32.882 ] 00:34:32.882 16:11:37 -- common/autotest_common.sh@895 -- # return 0 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.882 16:11:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.141 16:11:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:33.141 "name": "Existed_Raid", 00:34:33.141 "uuid": "6a7d2eb2-f04b-414a-b205-fb5dd7791a4d", 00:34:33.141 "strip_size_kb": 64, 00:34:33.141 "state": "online", 00:34:33.141 "raid_level": "raid0", 00:34:33.141 "superblock": true, 00:34:33.141 "num_base_bdevs": 2, 00:34:33.141 "num_base_bdevs_discovered": 2, 00:34:33.141 "num_base_bdevs_operational": 2, 00:34:33.141 "base_bdevs_list": [ 00:34:33.141 { 00:34:33.141 "name": "BaseBdev1", 00:34:33.141 "uuid": "22db1caa-1069-4f0d-9e66-bff85522d38d", 00:34:33.141 "is_configured": true, 00:34:33.141 "data_offset": 2048, 00:34:33.141 "data_size": 63488 00:34:33.141 }, 00:34:33.141 { 00:34:33.141 "name": "BaseBdev2", 00:34:33.141 "uuid": "39332737-6d1c-4742-b434-7ce7292b7a5a", 00:34:33.141 "is_configured": true, 00:34:33.141 "data_offset": 2048, 00:34:33.141 "data_size": 63488 00:34:33.141 } 00:34:33.141 ] 00:34:33.141 }' 00:34:33.141 16:11:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:33.141 16:11:37 -- common/autotest_common.sh@10 -- # set +x 00:34:33.400 16:11:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:33.658 [2024-07-22 16:11:37.865984] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:33.658 [2024-07-22 16:11:37.866396] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:33.658 [2024-07-22 16:11:37.866501] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:34:33.917 
16:11:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.917 16:11:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:34.176 16:11:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:34.176 "name": "Existed_Raid", 00:34:34.176 "uuid": "6a7d2eb2-f04b-414a-b205-fb5dd7791a4d", 00:34:34.176 "strip_size_kb": 64, 00:34:34.176 "state": "offline", 00:34:34.176 "raid_level": "raid0", 00:34:34.176 "superblock": true, 00:34:34.176 "num_base_bdevs": 2, 00:34:34.176 "num_base_bdevs_discovered": 1, 00:34:34.176 "num_base_bdevs_operational": 1, 00:34:34.176 "base_bdevs_list": [ 00:34:34.176 { 00:34:34.176 "name": null, 00:34:34.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.176 "is_configured": false, 00:34:34.176 "data_offset": 2048, 00:34:34.176 "data_size": 63488 00:34:34.176 }, 00:34:34.176 { 00:34:34.176 "name": "BaseBdev2", 00:34:34.176 "uuid": "39332737-6d1c-4742-b434-7ce7292b7a5a", 00:34:34.176 "is_configured": true, 00:34:34.176 "data_offset": 2048, 00:34:34.176 "data_size": 63488 00:34:34.176 } 00:34:34.176 ] 00:34:34.176 }' 00:34:34.176 16:11:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:34.176 16:11:38 -- common/autotest_common.sh@10 -- # set +x 00:34:34.434 16:11:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:34:34.434 16:11:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:34.434 16:11:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.434 16:11:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:34.692 16:11:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:34.692 16:11:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:34.692 16:11:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:34.951 [2024-07-22 16:11:38.987291] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:34.951 [2024-07-22 16:11:38.987373] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:34:34.951 16:11:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:34.951 16:11:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:34.951 16:11:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.951 16:11:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:34:35.209 16:11:39 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:34:35.209 16:11:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:34:35.209 16:11:39 -- bdev/bdev_raid.sh@287 -- # killprocess 70095 00:34:35.209 16:11:39 -- common/autotest_common.sh@926 -- # '[' -z 70095 ']' 00:34:35.209 16:11:39 -- common/autotest_common.sh@930 -- # kill -0 70095 00:34:35.209 16:11:39 -- common/autotest_common.sh@931 -- # uname 00:34:35.209 16:11:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:35.209 16:11:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70095 00:34:35.209 16:11:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:35.209 killing process with pid 70095 00:34:35.209 16:11:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:35.209 16:11:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70095' 00:34:35.209 16:11:39 -- common/autotest_common.sh@945 -- # kill 70095 00:34:35.209 [2024-07-22 16:11:39.358321] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:35.209 16:11:39 -- common/autotest_common.sh@950 -- # wait 70095 00:34:35.209 [2024-07-22 16:11:39.358449] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:34:36.584 00:34:36.584 real 0m10.249s 00:34:36.584 user 0m16.476s 00:34:36.584 sys 0m1.678s 00:34:36.584 ************************************ 00:34:36.584 END TEST raid_state_function_test_sb 00:34:36.584 ************************************ 00:34:36.584 16:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:36.584 16:11:40 -- common/autotest_common.sh@10 -- # set +x 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:34:36.584 16:11:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:34:36.584 16:11:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:36.584 16:11:40 -- common/autotest_common.sh@10 -- # set +x 00:34:36.584 ************************************ 00:34:36.584 START TEST raid_superblock_test 00:34:36.584 ************************************ 00:34:36.584 16:11:40 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:34:36.584 16:11:40 -- bdev/bdev_raid.sh@357 -- # raid_pid=70401 00:34:36.585 16:11:40 -- bdev/bdev_raid.sh@358 -- # waitforlisten 70401 
/var/tmp/spdk-raid.sock 00:34:36.585 16:11:40 -- common/autotest_common.sh@819 -- # '[' -z 70401 ']' 00:34:36.585 16:11:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:36.585 16:11:40 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:36.585 16:11:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:36.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:36.585 16:11:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:36.585 16:11:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:36.585 16:11:40 -- common/autotest_common.sh@10 -- # set +x 00:34:36.585 [2024-07-22 16:11:40.779035] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:34:36.585 [2024-07-22 16:11:40.779229] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70401 ] 00:34:36.843 [2024-07-22 16:11:40.978614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.103 [2024-07-22 16:11:41.242457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.361 [2024-07-22 16:11:41.463075] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:37.620 16:11:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:37.620 16:11:41 -- common/autotest_common.sh@852 -- # return 0 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:37.620 16:11:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:34:37.879 malloc1 00:34:37.879 16:11:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:38.138 [2024-07-22 16:11:42.258291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:38.138 [2024-07-22 16:11:42.258413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.138 [2024-07-22 16:11:42.258459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:34:38.138 [2024-07-22 16:11:42.258477] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.138 [2024-07-22 16:11:42.261572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.138 [2024-07-22 16:11:42.261616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:38.138 pt1 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
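Note: the first pass of the base-bdev loop traced above comes down to two RPC calls against the bdev_svc instance started for this test. A minimal sketch of one pass, assuming the same repository path and RPC socket seen in this run (the 32/512 arguments are the malloc bdev size in MB and its block size):

    # create a malloc bdev with 512-byte blocks...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_create 32 512 -b malloc1
    # ...and wrap it in a passthru bdev with a fixed UUID
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

The second pass, which the trace continues with below, repeats the same two calls for malloc2/pt2 using the ...0002 UUID.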
00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:38.138 16:11:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:34:38.397 malloc2 00:34:38.397 16:11:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:38.656 [2024-07-22 16:11:42.777090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:38.656 [2024-07-22 16:11:42.777181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.656 [2024-07-22 16:11:42.777220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:34:38.656 [2024-07-22 16:11:42.777236] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.656 [2024-07-22 16:11:42.780013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.656 [2024-07-22 16:11:42.780056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:38.656 pt2 00:34:38.656 16:11:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:34:38.656 16:11:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:34:38.656 16:11:42 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:34:38.915 [2024-07-22 16:11:43.049241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:38.915 [2024-07-22 16:11:43.051819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:38.915 [2024-07-22 16:11:43.052058] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:34:38.915 [2024-07-22 16:11:43.052080] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:38.915 [2024-07-22 16:11:43.052227] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:34:38.915 [2024-07-22 16:11:43.052650] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:34:38.915 [2024-07-22 16:11:43.052682] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:34:38.915 [2024-07-22 16:11:43.052852] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
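Note: the verify_raid_bdev_state call being traced here reduces to a single query plus field comparisons: fetch the raid bdev's JSON and check its name, state, raid level, strip size and base-bdev counts against the expected values. Roughly, using the same socket and the jq filter visible in the trace:

    # expect raid_bdev1 to be online as raid0 with a 64 KiB strip and 2 operational base bdevs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'

The harness then asserts on fields such as "state", "raid_level", "strip_size_kb", "num_base_bdevs_discovered" and "num_base_bdevs_operational" in the returned object, which is the JSON dumped just below.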
00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.915 16:11:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.173 16:11:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:39.173 "name": "raid_bdev1", 00:34:39.173 "uuid": "043efa0d-d6c9-42e2-b73c-938414a1cedc", 00:34:39.173 "strip_size_kb": 64, 00:34:39.173 "state": "online", 00:34:39.173 "raid_level": "raid0", 00:34:39.173 "superblock": true, 00:34:39.173 "num_base_bdevs": 2, 00:34:39.173 "num_base_bdevs_discovered": 2, 00:34:39.173 "num_base_bdevs_operational": 2, 00:34:39.173 "base_bdevs_list": [ 00:34:39.173 { 00:34:39.173 "name": "pt1", 00:34:39.173 "uuid": "efd786fc-6739-5662-b7d4-1d4ed5f57d11", 00:34:39.173 "is_configured": true, 00:34:39.173 "data_offset": 2048, 00:34:39.173 "data_size": 63488 00:34:39.173 }, 00:34:39.173 { 00:34:39.173 "name": "pt2", 00:34:39.173 "uuid": "7cd12901-88bb-5954-a2dc-9b0c18ec4a16", 00:34:39.173 "is_configured": true, 00:34:39.173 "data_offset": 2048, 00:34:39.173 "data_size": 63488 00:34:39.173 } 00:34:39.173 ] 00:34:39.173 }' 00:34:39.173 16:11:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:39.173 16:11:43 -- common/autotest_common.sh@10 -- # set +x 00:34:39.433 16:11:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:39.433 16:11:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:34:39.740 [2024-07-22 16:11:43.905709] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:39.740 16:11:43 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=043efa0d-d6c9-42e2-b73c-938414a1cedc 00:34:39.740 16:11:43 -- bdev/bdev_raid.sh@380 -- # '[' -z 043efa0d-d6c9-42e2-b73c-938414a1cedc ']' 00:34:39.740 16:11:43 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:39.998 [2024-07-22 16:11:44.137495] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:39.998 [2024-07-22 16:11:44.137557] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:39.998 [2024-07-22 16:11:44.137671] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:39.998 [2024-07-22 16:11:44.137738] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:39.998 [2024-07-22 16:11:44.137754] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:34:39.998 16:11:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.998 16:11:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:34:40.256 16:11:44 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:34:40.256 16:11:44 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:34:40.256 16:11:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:40.256 16:11:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
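Note: teardown is the mirror image and can be checked with the same RPCs: delete the raid bdev, confirm bdev_raid_get_bdevs no longer reports it, then drop the passthru bdevs and verify none remain. A sketch, again assuming the socket path used in this run (the trace continues below with pt2 and the final passthru check):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    # expect empty output: the raid bdev is gone
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[]'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
    # expect "false": no passthru bdevs left behind
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs \
        | jq -r '[.[] | select(.product_name == "passthru")] | any'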
00:34:40.514 16:11:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:34:40.514 16:11:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:40.773 16:11:44 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:40.773 16:11:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:41.031 16:11:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:34:41.032 16:11:45 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:34:41.032 16:11:45 -- common/autotest_common.sh@640 -- # local es=0 00:34:41.032 16:11:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:34:41.032 16:11:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.032 16:11:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:41.032 16:11:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.032 16:11:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:41.032 16:11:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.032 16:11:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:34:41.032 16:11:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.032 16:11:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:41.032 16:11:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:34:41.290 [2024-07-22 16:11:45.529869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:41.290 [2024-07-22 16:11:45.532362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:41.290 [2024-07-22 16:11:45.532530] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:34:41.290 [2024-07-22 16:11:45.532610] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:34:41.290 [2024-07-22 16:11:45.532637] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:41.290 [2024-07-22 16:11:45.532653] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:34:41.290 request: 00:34:41.290 { 00:34:41.290 "name": "raid_bdev1", 00:34:41.290 "raid_level": "raid0", 00:34:41.290 "base_bdevs": [ 00:34:41.290 "malloc1", 00:34:41.290 "malloc2" 00:34:41.290 ], 00:34:41.290 "superblock": false, 00:34:41.290 "strip_size_kb": 64, 00:34:41.290 "method": "bdev_raid_create", 00:34:41.290 "req_id": 1 00:34:41.290 } 00:34:41.290 Got JSON-RPC error response 00:34:41.290 response: 00:34:41.290 { 00:34:41.290 "code": -17, 00:34:41.290 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:41.290 } 00:34:41.290 16:11:45 -- common/autotest_common.sh@643 -- # es=1 00:34:41.290 16:11:45 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:34:41.290 16:11:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:34:41.290 16:11:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:34:41.290 16:11:45 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:34:41.290 16:11:45 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.857 16:11:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:34:41.857 16:11:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:34:41.857 16:11:45 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:41.857 [2024-07-22 16:11:46.069894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:41.857 [2024-07-22 16:11:46.069998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:41.857 [2024-07-22 16:11:46.070038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:34:41.857 [2024-07-22 16:11:46.070054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:41.857 [2024-07-22 16:11:46.073002] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:41.857 [2024-07-22 16:11:46.073050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:41.857 [2024-07-22 16:11:46.073166] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:34:41.857 [2024-07-22 16:11:46.073240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:41.857 pt1 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.857 16:11:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.423 16:11:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:42.423 "name": "raid_bdev1", 00:34:42.423 "uuid": "043efa0d-d6c9-42e2-b73c-938414a1cedc", 00:34:42.423 "strip_size_kb": 64, 00:34:42.423 "state": "configuring", 00:34:42.423 "raid_level": "raid0", 00:34:42.423 "superblock": true, 00:34:42.423 "num_base_bdevs": 2, 00:34:42.423 "num_base_bdevs_discovered": 1, 00:34:42.423 "num_base_bdevs_operational": 2, 00:34:42.423 "base_bdevs_list": [ 00:34:42.423 { 00:34:42.424 "name": "pt1", 00:34:42.424 "uuid": "efd786fc-6739-5662-b7d4-1d4ed5f57d11", 00:34:42.424 "is_configured": true, 00:34:42.424 "data_offset": 2048, 00:34:42.424 "data_size": 63488 00:34:42.424 }, 00:34:42.424 { 00:34:42.424 "name": null, 00:34:42.424 "uuid": "7cd12901-88bb-5954-a2dc-9b0c18ec4a16", 00:34:42.424 
"is_configured": false, 00:34:42.424 "data_offset": 2048, 00:34:42.424 "data_size": 63488 00:34:42.424 } 00:34:42.424 ] 00:34:42.424 }' 00:34:42.424 16:11:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:42.424 16:11:46 -- common/autotest_common.sh@10 -- # set +x 00:34:42.681 16:11:46 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:34:42.681 16:11:46 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:34:42.681 16:11:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:42.681 16:11:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:42.939 [2024-07-22 16:11:47.002140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:42.939 [2024-07-22 16:11:47.002245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:42.939 [2024-07-22 16:11:47.002289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:34:42.939 [2024-07-22 16:11:47.002305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:42.939 [2024-07-22 16:11:47.002859] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:42.939 [2024-07-22 16:11:47.002885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:42.939 [2024-07-22 16:11:47.003010] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:34:42.939 [2024-07-22 16:11:47.003047] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:42.939 [2024-07-22 16:11:47.003204] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:34:42.939 [2024-07-22 16:11:47.003220] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:42.939 [2024-07-22 16:11:47.003350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:34:42.939 [2024-07-22 16:11:47.003721] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:34:42.939 [2024-07-22 16:11:47.003742] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:34:42.939 [2024-07-22 16:11:47.003888] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:42.939 pt2 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:42.939 16:11:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.939 16:11:47 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.234 16:11:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:43.234 "name": "raid_bdev1", 00:34:43.234 "uuid": "043efa0d-d6c9-42e2-b73c-938414a1cedc", 00:34:43.234 "strip_size_kb": 64, 00:34:43.234 "state": "online", 00:34:43.234 "raid_level": "raid0", 00:34:43.234 "superblock": true, 00:34:43.234 "num_base_bdevs": 2, 00:34:43.234 "num_base_bdevs_discovered": 2, 00:34:43.234 "num_base_bdevs_operational": 2, 00:34:43.234 "base_bdevs_list": [ 00:34:43.234 { 00:34:43.234 "name": "pt1", 00:34:43.234 "uuid": "efd786fc-6739-5662-b7d4-1d4ed5f57d11", 00:34:43.234 "is_configured": true, 00:34:43.234 "data_offset": 2048, 00:34:43.234 "data_size": 63488 00:34:43.234 }, 00:34:43.234 { 00:34:43.234 "name": "pt2", 00:34:43.234 "uuid": "7cd12901-88bb-5954-a2dc-9b0c18ec4a16", 00:34:43.234 "is_configured": true, 00:34:43.234 "data_offset": 2048, 00:34:43.234 "data_size": 63488 00:34:43.234 } 00:34:43.234 ] 00:34:43.234 }' 00:34:43.234 16:11:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:43.234 16:11:47 -- common/autotest_common.sh@10 -- # set +x 00:34:43.490 16:11:47 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:43.490 16:11:47 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:34:43.749 [2024-07-22 16:11:47.838620] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:43.749 16:11:47 -- bdev/bdev_raid.sh@430 -- # '[' 043efa0d-d6c9-42e2-b73c-938414a1cedc '!=' 043efa0d-d6c9-42e2-b73c-938414a1cedc ']' 00:34:43.749 16:11:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:34:43.749 16:11:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:43.749 16:11:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:34:43.749 16:11:47 -- bdev/bdev_raid.sh@511 -- # killprocess 70401 00:34:43.749 16:11:47 -- common/autotest_common.sh@926 -- # '[' -z 70401 ']' 00:34:43.749 16:11:47 -- common/autotest_common.sh@930 -- # kill -0 70401 00:34:43.749 16:11:47 -- common/autotest_common.sh@931 -- # uname 00:34:43.749 16:11:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:43.749 16:11:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70401 00:34:43.749 16:11:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:43.749 16:11:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:43.749 killing process with pid 70401 00:34:43.749 16:11:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70401' 00:34:43.749 16:11:47 -- common/autotest_common.sh@945 -- # kill 70401 00:34:43.749 16:11:47 -- common/autotest_common.sh@950 -- # wait 70401 00:34:43.749 [2024-07-22 16:11:47.898877] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:43.749 [2024-07-22 16:11:47.899010] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:43.749 [2024-07-22 16:11:47.899069] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:43.749 [2024-07-22 16:11:47.899090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:34:44.007 [2024-07-22 16:11:48.082664] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:34:45.379 00:34:45.379 real 0m8.629s 00:34:45.379 user 0m13.737s 00:34:45.379 sys 0m1.370s 00:34:45.379 16:11:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:34:45.379 ************************************ 00:34:45.379 END TEST raid_superblock_test 00:34:45.379 ************************************ 00:34:45.379 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:34:45.379 16:11:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:34:45.379 16:11:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:45.379 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:34:45.379 ************************************ 00:34:45.379 START TEST raid_state_function_test 00:34:45.379 ************************************ 00:34:45.379 16:11:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=70636 00:34:45.379 Process raid pid: 70636 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70636' 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70636 /var/tmp/spdk-raid.sock 00:34:45.379 16:11:49 -- common/autotest_common.sh@819 -- # '[' -z 70636 ']' 00:34:45.379 16:11:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:45.379 16:11:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:45.379 16:11:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:45.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
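(Editor's note: every raid test in this log follows the harness pattern visible here: a dedicated bdev_svc target is launched on a private RPC socket and then driven purely through rpc.py. A rough manual equivalent, using the exact command line shown above — the until-loop is only a stand-in for the test's waitforlisten helper:)

    # Bare bdev_svc target with bdev_raid debug logging, on its own RPC socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # Poll until the socket answers; rpc_get_methods is a standard SPDK RPC
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # From here on, every bdev_malloc_create / bdev_raid_create / bdev_raid_get_bdevs
    # call in the trace goes through -s /var/tmp/spdk-raid.sock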
00:34:45.379 16:11:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:45.379 16:11:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:45.379 16:11:49 -- common/autotest_common.sh@10 -- # set +x 00:34:45.379 [2024-07-22 16:11:49.467771] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:34:45.379 [2024-07-22 16:11:49.468567] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.379 [2024-07-22 16:11:49.642435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.946 [2024-07-22 16:11:49.913071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.946 [2024-07-22 16:11:50.129344] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:46.205 16:11:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:46.205 16:11:50 -- common/autotest_common.sh@852 -- # return 0 00:34:46.205 16:11:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:46.463 [2024-07-22 16:11:50.681347] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:46.463 [2024-07-22 16:11:50.681412] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:46.463 [2024-07-22 16:11:50.681429] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:46.463 [2024-07-22 16:11:50.681445] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.463 16:11:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:46.721 16:11:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:46.721 "name": "Existed_Raid", 00:34:46.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:46.721 "strip_size_kb": 64, 00:34:46.721 "state": "configuring", 00:34:46.721 "raid_level": "concat", 00:34:46.721 "superblock": false, 00:34:46.721 "num_base_bdevs": 2, 00:34:46.721 "num_base_bdevs_discovered": 0, 00:34:46.721 "num_base_bdevs_operational": 2, 00:34:46.721 "base_bdevs_list": [ 00:34:46.721 { 00:34:46.721 "name": "BaseBdev1", 00:34:46.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:46.721 "is_configured": false, 
00:34:46.721 "data_offset": 0, 00:34:46.721 "data_size": 0 00:34:46.721 }, 00:34:46.721 { 00:34:46.721 "name": "BaseBdev2", 00:34:46.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:46.721 "is_configured": false, 00:34:46.721 "data_offset": 0, 00:34:46.721 "data_size": 0 00:34:46.721 } 00:34:46.721 ] 00:34:46.721 }' 00:34:46.721 16:11:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:46.721 16:11:50 -- common/autotest_common.sh@10 -- # set +x 00:34:47.308 16:11:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:47.567 [2024-07-22 16:11:51.601459] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:47.567 [2024-07-22 16:11:51.601523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:34:47.567 16:11:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:47.825 [2024-07-22 16:11:51.881585] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:47.825 [2024-07-22 16:11:51.881651] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:47.825 [2024-07-22 16:11:51.881675] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:47.825 [2024-07-22 16:11:51.881693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:47.825 16:11:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:48.083 [2024-07-22 16:11:52.189960] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:48.083 BaseBdev1 00:34:48.083 16:11:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:48.083 16:11:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:34:48.083 16:11:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:48.083 16:11:52 -- common/autotest_common.sh@889 -- # local i 00:34:48.083 16:11:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:48.083 16:11:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:48.083 16:11:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:48.341 16:11:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:48.600 [ 00:34:48.600 { 00:34:48.600 "name": "BaseBdev1", 00:34:48.600 "aliases": [ 00:34:48.600 "d7c1efc1-b280-4867-8824-bc8c743cfab4" 00:34:48.600 ], 00:34:48.600 "product_name": "Malloc disk", 00:34:48.600 "block_size": 512, 00:34:48.600 "num_blocks": 65536, 00:34:48.600 "uuid": "d7c1efc1-b280-4867-8824-bc8c743cfab4", 00:34:48.600 "assigned_rate_limits": { 00:34:48.600 "rw_ios_per_sec": 0, 00:34:48.600 "rw_mbytes_per_sec": 0, 00:34:48.600 "r_mbytes_per_sec": 0, 00:34:48.600 "w_mbytes_per_sec": 0 00:34:48.600 }, 00:34:48.600 "claimed": true, 00:34:48.600 "claim_type": "exclusive_write", 00:34:48.600 "zoned": false, 00:34:48.600 "supported_io_types": { 00:34:48.600 "read": true, 00:34:48.600 "write": true, 00:34:48.600 "unmap": true, 00:34:48.600 "write_zeroes": true, 00:34:48.600 "flush": true, 00:34:48.600 "reset": true, 00:34:48.600 
"compare": false, 00:34:48.600 "compare_and_write": false, 00:34:48.600 "abort": true, 00:34:48.600 "nvme_admin": false, 00:34:48.600 "nvme_io": false 00:34:48.600 }, 00:34:48.600 "memory_domains": [ 00:34:48.600 { 00:34:48.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:48.600 "dma_device_type": 2 00:34:48.600 } 00:34:48.600 ], 00:34:48.600 "driver_specific": {} 00:34:48.600 } 00:34:48.600 ] 00:34:48.600 16:11:52 -- common/autotest_common.sh@895 -- # return 0 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.600 16:11:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:48.858 16:11:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:48.858 "name": "Existed_Raid", 00:34:48.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.858 "strip_size_kb": 64, 00:34:48.858 "state": "configuring", 00:34:48.858 "raid_level": "concat", 00:34:48.858 "superblock": false, 00:34:48.858 "num_base_bdevs": 2, 00:34:48.858 "num_base_bdevs_discovered": 1, 00:34:48.858 "num_base_bdevs_operational": 2, 00:34:48.858 "base_bdevs_list": [ 00:34:48.858 { 00:34:48.858 "name": "BaseBdev1", 00:34:48.858 "uuid": "d7c1efc1-b280-4867-8824-bc8c743cfab4", 00:34:48.858 "is_configured": true, 00:34:48.858 "data_offset": 0, 00:34:48.858 "data_size": 65536 00:34:48.858 }, 00:34:48.858 { 00:34:48.858 "name": "BaseBdev2", 00:34:48.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.858 "is_configured": false, 00:34:48.858 "data_offset": 0, 00:34:48.858 "data_size": 0 00:34:48.858 } 00:34:48.858 ] 00:34:48.858 }' 00:34:48.858 16:11:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:48.858 16:11:53 -- common/autotest_common.sh@10 -- # set +x 00:34:49.116 16:11:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:49.374 [2024-07-22 16:11:53.586419] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:49.374 [2024-07-22 16:11:53.586495] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:34:49.374 16:11:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:34:49.374 16:11:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:49.633 [2024-07-22 16:11:53.854736] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:49.633 [2024-07-22 16:11:53.857751] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:34:49.633 [2024-07-22 16:11:53.857824] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.633 16:11:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:49.891 16:11:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:49.891 "name": "Existed_Raid", 00:34:49.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:49.891 "strip_size_kb": 64, 00:34:49.891 "state": "configuring", 00:34:49.891 "raid_level": "concat", 00:34:49.891 "superblock": false, 00:34:49.891 "num_base_bdevs": 2, 00:34:49.891 "num_base_bdevs_discovered": 1, 00:34:49.891 "num_base_bdevs_operational": 2, 00:34:49.891 "base_bdevs_list": [ 00:34:49.891 { 00:34:49.891 "name": "BaseBdev1", 00:34:49.891 "uuid": "d7c1efc1-b280-4867-8824-bc8c743cfab4", 00:34:49.891 "is_configured": true, 00:34:49.891 "data_offset": 0, 00:34:49.891 "data_size": 65536 00:34:49.891 }, 00:34:49.891 { 00:34:49.891 "name": "BaseBdev2", 00:34:49.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:49.891 "is_configured": false, 00:34:49.891 "data_offset": 0, 00:34:49.891 "data_size": 0 00:34:49.891 } 00:34:49.891 ] 00:34:49.891 }' 00:34:49.891 16:11:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:49.891 16:11:54 -- common/autotest_common.sh@10 -- # set +x 00:34:50.468 16:11:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:50.468 [2024-07-22 16:11:54.685922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:50.468 [2024-07-22 16:11:54.686023] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:34:50.468 [2024-07-22 16:11:54.686049] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:50.468 [2024-07-22 16:11:54.686200] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:34:50.468 [2024-07-22 16:11:54.686641] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:34:50.468 [2024-07-22 16:11:54.686675] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:34:50.468 [2024-07-22 16:11:54.687013] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:50.468 BaseBdev2 00:34:50.468 16:11:54 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:34:50.468 16:11:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:34:50.468 16:11:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:50.468 16:11:54 -- common/autotest_common.sh@889 -- # local i 00:34:50.468 16:11:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:50.469 16:11:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:50.469 16:11:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:50.759 16:11:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:51.022 [ 00:34:51.022 { 00:34:51.022 "name": "BaseBdev2", 00:34:51.022 "aliases": [ 00:34:51.022 "897f0864-419c-4e4b-a3b0-8798139d758e" 00:34:51.022 ], 00:34:51.022 "product_name": "Malloc disk", 00:34:51.022 "block_size": 512, 00:34:51.022 "num_blocks": 65536, 00:34:51.022 "uuid": "897f0864-419c-4e4b-a3b0-8798139d758e", 00:34:51.022 "assigned_rate_limits": { 00:34:51.022 "rw_ios_per_sec": 0, 00:34:51.022 "rw_mbytes_per_sec": 0, 00:34:51.022 "r_mbytes_per_sec": 0, 00:34:51.022 "w_mbytes_per_sec": 0 00:34:51.022 }, 00:34:51.022 "claimed": true, 00:34:51.022 "claim_type": "exclusive_write", 00:34:51.022 "zoned": false, 00:34:51.022 "supported_io_types": { 00:34:51.022 "read": true, 00:34:51.022 "write": true, 00:34:51.022 "unmap": true, 00:34:51.022 "write_zeroes": true, 00:34:51.022 "flush": true, 00:34:51.022 "reset": true, 00:34:51.022 "compare": false, 00:34:51.022 "compare_and_write": false, 00:34:51.022 "abort": true, 00:34:51.022 "nvme_admin": false, 00:34:51.022 "nvme_io": false 00:34:51.022 }, 00:34:51.022 "memory_domains": [ 00:34:51.022 { 00:34:51.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.022 "dma_device_type": 2 00:34:51.022 } 00:34:51.022 ], 00:34:51.022 "driver_specific": {} 00:34:51.022 } 00:34:51.022 ] 00:34:51.022 16:11:55 -- common/autotest_common.sh@895 -- # return 0 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:51.022 16:11:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:51.281 16:11:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:51.281 "name": "Existed_Raid", 00:34:51.281 "uuid": "5620a20e-e544-4ea8-b6d4-11ce66610c5e", 00:34:51.281 "strip_size_kb": 64, 00:34:51.281 "state": "online", 00:34:51.281 "raid_level": "concat", 00:34:51.281 "superblock": false, 
00:34:51.281 "num_base_bdevs": 2, 00:34:51.281 "num_base_bdevs_discovered": 2, 00:34:51.281 "num_base_bdevs_operational": 2, 00:34:51.281 "base_bdevs_list": [ 00:34:51.281 { 00:34:51.281 "name": "BaseBdev1", 00:34:51.281 "uuid": "d7c1efc1-b280-4867-8824-bc8c743cfab4", 00:34:51.281 "is_configured": true, 00:34:51.281 "data_offset": 0, 00:34:51.281 "data_size": 65536 00:34:51.281 }, 00:34:51.281 { 00:34:51.281 "name": "BaseBdev2", 00:34:51.281 "uuid": "897f0864-419c-4e4b-a3b0-8798139d758e", 00:34:51.281 "is_configured": true, 00:34:51.281 "data_offset": 0, 00:34:51.281 "data_size": 65536 00:34:51.281 } 00:34:51.281 ] 00:34:51.281 }' 00:34:51.281 16:11:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:51.281 16:11:55 -- common/autotest_common.sh@10 -- # set +x 00:34:51.848 16:11:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:52.107 [2024-07-22 16:11:56.134689] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:52.107 [2024-07-22 16:11:56.134788] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:52.107 [2024-07-22 16:11:56.134865] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.107 16:11:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:52.365 16:11:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:52.365 "name": "Existed_Raid", 00:34:52.365 "uuid": "5620a20e-e544-4ea8-b6d4-11ce66610c5e", 00:34:52.365 "strip_size_kb": 64, 00:34:52.365 "state": "offline", 00:34:52.365 "raid_level": "concat", 00:34:52.365 "superblock": false, 00:34:52.365 "num_base_bdevs": 2, 00:34:52.365 "num_base_bdevs_discovered": 1, 00:34:52.365 "num_base_bdevs_operational": 1, 00:34:52.365 "base_bdevs_list": [ 00:34:52.365 { 00:34:52.365 "name": null, 00:34:52.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.365 "is_configured": false, 00:34:52.365 "data_offset": 0, 00:34:52.365 "data_size": 65536 00:34:52.365 }, 00:34:52.365 { 00:34:52.365 "name": "BaseBdev2", 00:34:52.365 "uuid": "897f0864-419c-4e4b-a3b0-8798139d758e", 00:34:52.365 "is_configured": true, 00:34:52.365 "data_offset": 0, 00:34:52.365 
"data_size": 65536 00:34:52.365 } 00:34:52.365 ] 00:34:52.365 }' 00:34:52.365 16:11:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:52.365 16:11:56 -- common/autotest_common.sh@10 -- # set +x 00:34:52.623 16:11:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:34:52.623 16:11:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:52.623 16:11:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:52.623 16:11:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:34:52.882 16:11:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:34:52.882 16:11:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:52.882 16:11:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:53.139 [2024-07-22 16:11:57.335705] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:53.139 [2024-07-22 16:11:57.335865] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:34:53.397 16:11:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:34:53.397 16:11:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:34:53.397 16:11:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.397 16:11:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:34:53.655 16:11:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:34:53.655 16:11:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:34:53.655 16:11:57 -- bdev/bdev_raid.sh@287 -- # killprocess 70636 00:34:53.655 16:11:57 -- common/autotest_common.sh@926 -- # '[' -z 70636 ']' 00:34:53.655 16:11:57 -- common/autotest_common.sh@930 -- # kill -0 70636 00:34:53.655 16:11:57 -- common/autotest_common.sh@931 -- # uname 00:34:53.655 16:11:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:53.655 16:11:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70636 00:34:53.655 killing process with pid 70636 00:34:53.655 16:11:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:53.655 16:11:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:53.655 16:11:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70636' 00:34:53.655 16:11:57 -- common/autotest_common.sh@945 -- # kill 70636 00:34:53.655 16:11:57 -- common/autotest_common.sh@950 -- # wait 70636 00:34:53.655 [2024-07-22 16:11:57.727331] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:53.655 [2024-07-22 16:11:57.727537] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:55.032 ************************************ 00:34:55.032 END TEST raid_state_function_test 00:34:55.032 ************************************ 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:34:55.032 00:34:55.032 real 0m9.724s 00:34:55.032 user 0m15.600s 00:34:55.032 sys 0m1.561s 00:34:55.032 16:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:55.032 16:11:59 -- common/autotest_common.sh@10 -- # set +x 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:34:55.032 16:11:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:34:55.032 16:11:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:55.032 16:11:59 -- common/autotest_common.sh@10 -- # set +x 
00:34:55.032 ************************************ 00:34:55.032 START TEST raid_state_function_test_sb 00:34:55.032 ************************************ 00:34:55.032 16:11:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=70933 00:34:55.032 Process raid pid: 70933 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70933' 00:34:55.032 16:11:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70933 /var/tmp/spdk-raid.sock 00:34:55.032 16:11:59 -- common/autotest_common.sh@819 -- # '[' -z 70933 ']' 00:34:55.032 16:11:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:55.032 16:11:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:55.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:55.032 16:11:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:55.032 16:11:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:55.032 16:11:59 -- common/autotest_common.sh@10 -- # set +x 00:34:55.032 [2024-07-22 16:11:59.267447] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:34:55.032 [2024-07-22 16:11:59.267623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.290 [2024-07-22 16:11:59.439383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.550 [2024-07-22 16:11:59.725957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.809 [2024-07-22 16:11:59.958228] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:56.067 16:12:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:56.067 16:12:00 -- common/autotest_common.sh@852 -- # return 0 00:34:56.067 16:12:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:56.325 [2024-07-22 16:12:00.412957] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:56.325 [2024-07-22 16:12:00.413065] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:56.325 [2024-07-22 16:12:00.413083] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:56.325 [2024-07-22 16:12:00.413114] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.325 16:12:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:56.584 16:12:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:56.584 "name": "Existed_Raid", 00:34:56.584 "uuid": "ef9f8dfe-e53a-4e78-96ae-f002b38a0c0d", 00:34:56.584 "strip_size_kb": 64, 00:34:56.584 "state": "configuring", 00:34:56.584 "raid_level": "concat", 00:34:56.584 "superblock": true, 00:34:56.584 "num_base_bdevs": 2, 00:34:56.584 "num_base_bdevs_discovered": 0, 00:34:56.584 "num_base_bdevs_operational": 2, 00:34:56.584 "base_bdevs_list": [ 00:34:56.584 { 00:34:56.584 "name": "BaseBdev1", 00:34:56.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.584 "is_configured": false, 00:34:56.584 "data_offset": 0, 00:34:56.584 "data_size": 0 00:34:56.584 }, 00:34:56.584 { 00:34:56.584 "name": "BaseBdev2", 00:34:56.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.584 "is_configured": false, 00:34:56.584 "data_offset": 0, 00:34:56.584 "data_size": 0 00:34:56.584 } 00:34:56.584 ] 00:34:56.584 }' 00:34:56.584 16:12:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:56.584 16:12:00 -- 
common/autotest_common.sh@10 -- # set +x 00:34:56.842 16:12:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:57.100 [2024-07-22 16:12:01.260930] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:57.100 [2024-07-22 16:12:01.261363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:34:57.100 16:12:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:57.357 [2024-07-22 16:12:01.533338] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:57.357 [2024-07-22 16:12:01.533458] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:57.357 [2024-07-22 16:12:01.533503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:57.357 [2024-07-22 16:12:01.533537] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:57.357 16:12:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:57.615 [2024-07-22 16:12:01.815261] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:57.615 BaseBdev1 00:34:57.615 16:12:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:34:57.615 16:12:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:34:57.615 16:12:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:57.615 16:12:01 -- common/autotest_common.sh@889 -- # local i 00:34:57.615 16:12:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:57.615 16:12:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:57.615 16:12:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:57.873 16:12:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:58.132 [ 00:34:58.132 { 00:34:58.132 "name": "BaseBdev1", 00:34:58.132 "aliases": [ 00:34:58.132 "220249cc-4576-438a-b368-119568067e11" 00:34:58.132 ], 00:34:58.132 "product_name": "Malloc disk", 00:34:58.132 "block_size": 512, 00:34:58.132 "num_blocks": 65536, 00:34:58.132 "uuid": "220249cc-4576-438a-b368-119568067e11", 00:34:58.132 "assigned_rate_limits": { 00:34:58.132 "rw_ios_per_sec": 0, 00:34:58.132 "rw_mbytes_per_sec": 0, 00:34:58.132 "r_mbytes_per_sec": 0, 00:34:58.132 "w_mbytes_per_sec": 0 00:34:58.132 }, 00:34:58.132 "claimed": true, 00:34:58.132 "claim_type": "exclusive_write", 00:34:58.132 "zoned": false, 00:34:58.132 "supported_io_types": { 00:34:58.132 "read": true, 00:34:58.132 "write": true, 00:34:58.132 "unmap": true, 00:34:58.132 "write_zeroes": true, 00:34:58.132 "flush": true, 00:34:58.132 "reset": true, 00:34:58.132 "compare": false, 00:34:58.132 "compare_and_write": false, 00:34:58.132 "abort": true, 00:34:58.132 "nvme_admin": false, 00:34:58.132 "nvme_io": false 00:34:58.132 }, 00:34:58.132 "memory_domains": [ 00:34:58.132 { 00:34:58.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.132 "dma_device_type": 2 00:34:58.132 } 00:34:58.132 ], 00:34:58.132 "driver_specific": {} 00:34:58.132 } 00:34:58.132 ] 00:34:58.132 
16:12:02 -- common/autotest_common.sh@895 -- # return 0 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.132 16:12:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.390 16:12:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:34:58.390 "name": "Existed_Raid", 00:34:58.390 "uuid": "8df19631-3668-4e25-a3c6-7a12eb2bab95", 00:34:58.390 "strip_size_kb": 64, 00:34:58.390 "state": "configuring", 00:34:58.390 "raid_level": "concat", 00:34:58.390 "superblock": true, 00:34:58.390 "num_base_bdevs": 2, 00:34:58.390 "num_base_bdevs_discovered": 1, 00:34:58.390 "num_base_bdevs_operational": 2, 00:34:58.390 "base_bdevs_list": [ 00:34:58.390 { 00:34:58.390 "name": "BaseBdev1", 00:34:58.390 "uuid": "220249cc-4576-438a-b368-119568067e11", 00:34:58.390 "is_configured": true, 00:34:58.390 "data_offset": 2048, 00:34:58.390 "data_size": 63488 00:34:58.390 }, 00:34:58.390 { 00:34:58.390 "name": "BaseBdev2", 00:34:58.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.390 "is_configured": false, 00:34:58.390 "data_offset": 0, 00:34:58.390 "data_size": 0 00:34:58.390 } 00:34:58.390 ] 00:34:58.390 }' 00:34:58.390 16:12:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:34:58.390 16:12:02 -- common/autotest_common.sh@10 -- # set +x 00:34:58.649 16:12:02 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:58.917 [2024-07-22 16:12:03.156258] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:58.917 [2024-07-22 16:12:03.156554] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:34:58.917 16:12:03 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:34:58.917 16:12:03 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:59.488 16:12:03 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:59.747 BaseBdev1 00:34:59.747 16:12:03 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:34:59.747 16:12:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:34:59.747 16:12:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:34:59.747 16:12:03 -- common/autotest_common.sh@889 -- # local i 00:34:59.747 16:12:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:34:59.747 16:12:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:34:59.747 16:12:03 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:00.005 16:12:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:00.005 [ 00:35:00.005 { 00:35:00.005 "name": "BaseBdev1", 00:35:00.005 "aliases": [ 00:35:00.005 "0369211c-23a7-4699-9093-1c55bf3972cc" 00:35:00.005 ], 00:35:00.005 "product_name": "Malloc disk", 00:35:00.005 "block_size": 512, 00:35:00.005 "num_blocks": 65536, 00:35:00.005 "uuid": "0369211c-23a7-4699-9093-1c55bf3972cc", 00:35:00.005 "assigned_rate_limits": { 00:35:00.005 "rw_ios_per_sec": 0, 00:35:00.005 "rw_mbytes_per_sec": 0, 00:35:00.005 "r_mbytes_per_sec": 0, 00:35:00.005 "w_mbytes_per_sec": 0 00:35:00.005 }, 00:35:00.005 "claimed": false, 00:35:00.005 "zoned": false, 00:35:00.005 "supported_io_types": { 00:35:00.005 "read": true, 00:35:00.005 "write": true, 00:35:00.005 "unmap": true, 00:35:00.005 "write_zeroes": true, 00:35:00.005 "flush": true, 00:35:00.005 "reset": true, 00:35:00.005 "compare": false, 00:35:00.005 "compare_and_write": false, 00:35:00.005 "abort": true, 00:35:00.005 "nvme_admin": false, 00:35:00.005 "nvme_io": false 00:35:00.005 }, 00:35:00.005 "memory_domains": [ 00:35:00.005 { 00:35:00.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.005 "dma_device_type": 2 00:35:00.005 } 00:35:00.005 ], 00:35:00.005 "driver_specific": {} 00:35:00.005 } 00:35:00.005 ] 00:35:00.005 16:12:04 -- common/autotest_common.sh@895 -- # return 0 00:35:00.005 16:12:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:00.264 [2024-07-22 16:12:04.489245] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:00.264 [2024-07-22 16:12:04.491785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:00.264 [2024-07-22 16:12:04.491847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.264 16:12:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.522 16:12:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:00.522 "name": "Existed_Raid", 00:35:00.522 "uuid": "36cde2e0-ecb6-4f47-9be2-57b1200cead7", 00:35:00.522 "strip_size_kb": 64, 00:35:00.522 "state": 
"configuring", 00:35:00.522 "raid_level": "concat", 00:35:00.522 "superblock": true, 00:35:00.522 "num_base_bdevs": 2, 00:35:00.522 "num_base_bdevs_discovered": 1, 00:35:00.522 "num_base_bdevs_operational": 2, 00:35:00.522 "base_bdevs_list": [ 00:35:00.522 { 00:35:00.522 "name": "BaseBdev1", 00:35:00.522 "uuid": "0369211c-23a7-4699-9093-1c55bf3972cc", 00:35:00.522 "is_configured": true, 00:35:00.522 "data_offset": 2048, 00:35:00.522 "data_size": 63488 00:35:00.522 }, 00:35:00.522 { 00:35:00.522 "name": "BaseBdev2", 00:35:00.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.522 "is_configured": false, 00:35:00.522 "data_offset": 0, 00:35:00.522 "data_size": 0 00:35:00.522 } 00:35:00.522 ] 00:35:00.522 }' 00:35:00.522 16:12:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:00.522 16:12:04 -- common/autotest_common.sh@10 -- # set +x 00:35:01.093 16:12:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:01.374 [2024-07-22 16:12:05.374577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:01.374 [2024-07-22 16:12:05.375224] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:35:01.374 [2024-07-22 16:12:05.375380] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:01.374 [2024-07-22 16:12:05.375659] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:35:01.374 [2024-07-22 16:12:05.376206] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:35:01.374 BaseBdev2 00:35:01.374 [2024-07-22 16:12:05.376356] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:35:01.374 [2024-07-22 16:12:05.376558] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:01.374 16:12:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:01.374 16:12:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:35:01.374 16:12:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:01.374 16:12:05 -- common/autotest_common.sh@889 -- # local i 00:35:01.374 16:12:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:01.374 16:12:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:01.374 16:12:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:01.633 16:12:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:01.633 [ 00:35:01.633 { 00:35:01.633 "name": "BaseBdev2", 00:35:01.633 "aliases": [ 00:35:01.633 "49c7c500-b2d6-4116-9dfc-b0a9b245b882" 00:35:01.633 ], 00:35:01.633 "product_name": "Malloc disk", 00:35:01.633 "block_size": 512, 00:35:01.633 "num_blocks": 65536, 00:35:01.633 "uuid": "49c7c500-b2d6-4116-9dfc-b0a9b245b882", 00:35:01.633 "assigned_rate_limits": { 00:35:01.633 "rw_ios_per_sec": 0, 00:35:01.633 "rw_mbytes_per_sec": 0, 00:35:01.633 "r_mbytes_per_sec": 0, 00:35:01.633 "w_mbytes_per_sec": 0 00:35:01.633 }, 00:35:01.633 "claimed": true, 00:35:01.633 "claim_type": "exclusive_write", 00:35:01.633 "zoned": false, 00:35:01.633 "supported_io_types": { 00:35:01.633 "read": true, 00:35:01.633 "write": true, 00:35:01.633 "unmap": true, 00:35:01.633 "write_zeroes": true, 00:35:01.633 "flush": true, 00:35:01.633 
"reset": true, 00:35:01.633 "compare": false, 00:35:01.633 "compare_and_write": false, 00:35:01.633 "abort": true, 00:35:01.633 "nvme_admin": false, 00:35:01.633 "nvme_io": false 00:35:01.633 }, 00:35:01.633 "memory_domains": [ 00:35:01.633 { 00:35:01.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:01.633 "dma_device_type": 2 00:35:01.633 } 00:35:01.633 ], 00:35:01.633 "driver_specific": {} 00:35:01.633 } 00:35:01.633 ] 00:35:01.633 16:12:05 -- common/autotest_common.sh@895 -- # return 0 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.633 16:12:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:01.891 16:12:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:01.891 "name": "Existed_Raid", 00:35:01.891 "uuid": "36cde2e0-ecb6-4f47-9be2-57b1200cead7", 00:35:01.891 "strip_size_kb": 64, 00:35:01.891 "state": "online", 00:35:01.891 "raid_level": "concat", 00:35:01.891 "superblock": true, 00:35:01.891 "num_base_bdevs": 2, 00:35:01.891 "num_base_bdevs_discovered": 2, 00:35:01.891 "num_base_bdevs_operational": 2, 00:35:01.891 "base_bdevs_list": [ 00:35:01.891 { 00:35:01.891 "name": "BaseBdev1", 00:35:01.891 "uuid": "0369211c-23a7-4699-9093-1c55bf3972cc", 00:35:01.891 "is_configured": true, 00:35:01.891 "data_offset": 2048, 00:35:01.891 "data_size": 63488 00:35:01.891 }, 00:35:01.891 { 00:35:01.891 "name": "BaseBdev2", 00:35:01.891 "uuid": "49c7c500-b2d6-4116-9dfc-b0a9b245b882", 00:35:01.891 "is_configured": true, 00:35:01.891 "data_offset": 2048, 00:35:01.891 "data_size": 63488 00:35:01.891 } 00:35:01.891 ] 00:35:01.891 }' 00:35:01.891 16:12:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:01.891 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:35:02.466 16:12:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:02.466 [2024-07-22 16:12:06.699115] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:02.466 [2024-07-22 16:12:06.699167] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:02.466 [2024-07-22 16:12:06.699251] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:35:02.724 
16:12:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:02.724 16:12:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.982 16:12:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:02.982 "name": "Existed_Raid", 00:35:02.982 "uuid": "36cde2e0-ecb6-4f47-9be2-57b1200cead7", 00:35:02.982 "strip_size_kb": 64, 00:35:02.982 "state": "offline", 00:35:02.982 "raid_level": "concat", 00:35:02.982 "superblock": true, 00:35:02.982 "num_base_bdevs": 2, 00:35:02.982 "num_base_bdevs_discovered": 1, 00:35:02.982 "num_base_bdevs_operational": 1, 00:35:02.982 "base_bdevs_list": [ 00:35:02.982 { 00:35:02.982 "name": null, 00:35:02.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.982 "is_configured": false, 00:35:02.982 "data_offset": 2048, 00:35:02.982 "data_size": 63488 00:35:02.982 }, 00:35:02.982 { 00:35:02.982 "name": "BaseBdev2", 00:35:02.982 "uuid": "49c7c500-b2d6-4116-9dfc-b0a9b245b882", 00:35:02.982 "is_configured": true, 00:35:02.982 "data_offset": 2048, 00:35:02.982 "data_size": 63488 00:35:02.982 } 00:35:02.982 ] 00:35:02.982 }' 00:35:02.982 16:12:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:02.982 16:12:07 -- common/autotest_common.sh@10 -- # set +x 00:35:03.239 16:12:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:03.239 16:12:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:03.239 16:12:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.239 16:12:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:03.496 16:12:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:03.496 16:12:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:03.496 16:12:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:03.754 [2024-07-22 16:12:07.878803] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:03.754 [2024-07-22 16:12:07.879187] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:35:03.754 16:12:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:03.754 16:12:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:03.754 16:12:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:03.754 16:12:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:04.011 16:12:08 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:35:04.011 16:12:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:04.011 16:12:08 -- bdev/bdev_raid.sh@287 -- # killprocess 70933 00:35:04.011 16:12:08 -- common/autotest_common.sh@926 -- # '[' -z 70933 ']' 00:35:04.011 16:12:08 -- common/autotest_common.sh@930 -- # kill -0 70933 00:35:04.011 16:12:08 -- common/autotest_common.sh@931 -- # uname 00:35:04.011 16:12:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:04.011 16:12:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70933 00:35:04.269 killing process with pid 70933 00:35:04.269 16:12:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:04.269 16:12:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:04.269 16:12:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70933' 00:35:04.269 16:12:08 -- common/autotest_common.sh@945 -- # kill 70933 00:35:04.269 16:12:08 -- common/autotest_common.sh@950 -- # wait 70933 00:35:04.269 [2024-07-22 16:12:08.291527] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:04.269 [2024-07-22 16:12:08.291662] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:05.681 ************************************ 00:35:05.681 END TEST raid_state_function_test_sb 00:35:05.681 ************************************ 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:35:05.681 00:35:05.681 real 0m10.388s 00:35:05.681 user 0m16.728s 00:35:05.681 sys 0m1.716s 00:35:05.681 16:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:05.681 16:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:35:05.681 16:12:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:35:05.681 16:12:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:05.681 16:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:05.681 ************************************ 00:35:05.681 START TEST raid_superblock_test 00:35:05.681 ************************************ 00:35:05.681 16:12:09 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=71245 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:05.681 16:12:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 71245 /var/tmp/spdk-raid.sock 00:35:05.681 16:12:09 -- common/autotest_common.sh@819 -- # '[' -z 71245 ']' 00:35:05.681 16:12:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:05.681 16:12:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:05.681 16:12:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:05.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:05.681 16:12:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:05.681 16:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:05.681 [2024-07-22 16:12:09.672421] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:35:05.681 [2024-07-22 16:12:09.672844] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71245 ] 00:35:05.681 [2024-07-22 16:12:09.841797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.941 [2024-07-22 16:12:10.104544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:06.198 [2024-07-22 16:12:10.339809] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:06.455 16:12:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:06.455 16:12:10 -- common/autotest_common.sh@852 -- # return 0 00:35:06.455 16:12:10 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:35:06.455 16:12:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:06.455 16:12:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:35:06.455 16:12:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:35:06.456 16:12:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:06.456 16:12:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:06.456 16:12:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:06.456 16:12:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:06.456 16:12:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:35:06.712 malloc1 00:35:06.712 16:12:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:06.971 [2024-07-22 16:12:11.098452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:06.971 [2024-07-22 16:12:11.098668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:06.971 [2024-07-22 16:12:11.098728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:35:06.971 [2024-07-22 16:12:11.098745] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:06.971 [2024-07-22 16:12:11.101763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:06.971 pt1 00:35:06.971 [2024-07-22 16:12:11.102009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@361 -- 
# (( i++ )) 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:06.971 16:12:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:35:07.229 malloc2 00:35:07.229 16:12:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:07.488 [2024-07-22 16:12:11.668042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:07.488 [2024-07-22 16:12:11.668376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:07.488 [2024-07-22 16:12:11.668576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:35:07.488 [2024-07-22 16:12:11.668747] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:07.488 [2024-07-22 16:12:11.671776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:07.488 [2024-07-22 16:12:11.672053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:07.488 pt2 00:35:07.488 16:12:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:35:07.488 16:12:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:07.488 16:12:11 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:35:07.747 [2024-07-22 16:12:11.896620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:07.747 [2024-07-22 16:12:11.899445] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:07.747 [2024-07-22 16:12:11.899727] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:35:07.747 [2024-07-22 16:12:11.899745] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:07.747 [2024-07-22 16:12:11.899892] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:35:07.747 [2024-07-22 16:12:11.900609] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:35:07.747 [2024-07-22 16:12:11.900760] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:35:07.747 [2024-07-22 16:12:11.901145] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.747 16:12:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.006 16:12:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:08.006 "name": "raid_bdev1", 00:35:08.006 "uuid": "5e8bb3ba-8b2e-4531-b28d-95cffdfe39ec", 00:35:08.006 "strip_size_kb": 64, 00:35:08.006 "state": "online", 00:35:08.006 "raid_level": "concat", 00:35:08.006 "superblock": true, 00:35:08.006 "num_base_bdevs": 2, 00:35:08.006 "num_base_bdevs_discovered": 2, 00:35:08.006 "num_base_bdevs_operational": 2, 00:35:08.006 "base_bdevs_list": [ 00:35:08.006 { 00:35:08.006 "name": "pt1", 00:35:08.006 "uuid": "586168aa-12b2-5e55-a230-74f4c4066dc5", 00:35:08.006 "is_configured": true, 00:35:08.006 "data_offset": 2048, 00:35:08.006 "data_size": 63488 00:35:08.006 }, 00:35:08.006 { 00:35:08.006 "name": "pt2", 00:35:08.006 "uuid": "b830375c-8d4b-5b4c-982d-23355bb2f289", 00:35:08.006 "is_configured": true, 00:35:08.006 "data_offset": 2048, 00:35:08.006 "data_size": 63488 00:35:08.006 } 00:35:08.006 ] 00:35:08.006 }' 00:35:08.006 16:12:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:08.006 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:35:08.265 16:12:12 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:08.265 16:12:12 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:35:08.523 [2024-07-22 16:12:12.709752] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:08.523 16:12:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5e8bb3ba-8b2e-4531-b28d-95cffdfe39ec 00:35:08.523 16:12:12 -- bdev/bdev_raid.sh@380 -- # '[' -z 5e8bb3ba-8b2e-4531-b28d-95cffdfe39ec ']' 00:35:08.523 16:12:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:08.814 [2024-07-22 16:12:12.981563] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:08.814 [2024-07-22 16:12:12.981607] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:08.814 [2024-07-22 16:12:12.981712] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:08.814 [2024-07-22 16:12:12.981827] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:08.814 [2024-07-22 16:12:12.981843] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:35:08.814 16:12:13 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.814 16:12:13 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:35:09.080 16:12:13 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:35:09.080 16:12:13 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:35:09.080 16:12:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:09.080 16:12:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:35:09.338 16:12:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:09.338 16:12:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:09.596 16:12:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:09.596 16:12:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:09.854 16:12:14 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:35:09.854 16:12:14 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:35:09.854 16:12:14 -- common/autotest_common.sh@640 -- # local es=0 00:35:09.854 16:12:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:35:09.854 16:12:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:09.854 16:12:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:09.854 16:12:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:09.854 16:12:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:09.854 16:12:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:09.854 16:12:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:09.854 16:12:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:09.854 16:12:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:09.854 16:12:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:35:10.112 [2024-07-22 16:12:14.305923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:10.112 [2024-07-22 16:12:14.308545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:10.112 [2024-07-22 16:12:14.308642] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:35:10.112 [2024-07-22 16:12:14.308721] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:35:10.112 [2024-07-22 16:12:14.308759] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:10.112 [2024-07-22 16:12:14.308772] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:35:10.112 request: 00:35:10.112 { 00:35:10.112 "name": "raid_bdev1", 00:35:10.112 "raid_level": "concat", 00:35:10.112 "base_bdevs": [ 00:35:10.112 "malloc1", 00:35:10.112 "malloc2" 00:35:10.112 ], 00:35:10.112 "superblock": false, 00:35:10.112 "strip_size_kb": 64, 00:35:10.112 "method": "bdev_raid_create", 00:35:10.112 "req_id": 1 00:35:10.112 } 00:35:10.112 Got JSON-RPC error response 00:35:10.112 response: 00:35:10.112 { 00:35:10.112 "code": -17, 00:35:10.112 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:10.112 } 00:35:10.112 16:12:14 -- common/autotest_common.sh@643 -- # es=1 00:35:10.112 16:12:14 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:35:10.113 16:12:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:35:10.113 16:12:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:35:10.113 16:12:14 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.113 16:12:14 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:35:10.371 16:12:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:35:10.371 16:12:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:35:10.371 16:12:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:10.628 [2024-07-22 16:12:14.878266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:10.628 [2024-07-22 16:12:14.878623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:10.628 [2024-07-22 16:12:14.878682] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:35:10.628 [2024-07-22 16:12:14.878699] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:10.628 [2024-07-22 16:12:14.882070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:10.628 [2024-07-22 16:12:14.882114] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:10.628 [2024-07-22 16:12:14.882230] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:35:10.628 [2024-07-22 16:12:14.882306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:10.628 pt1 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.886 16:12:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.886 16:12:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:10.886 "name": "raid_bdev1", 00:35:10.886 "uuid": "5e8bb3ba-8b2e-4531-b28d-95cffdfe39ec", 00:35:10.886 "strip_size_kb": 64, 00:35:10.886 "state": "configuring", 00:35:10.886 "raid_level": "concat", 00:35:10.886 "superblock": true, 00:35:10.886 "num_base_bdevs": 2, 00:35:10.886 "num_base_bdevs_discovered": 1, 00:35:10.886 "num_base_bdevs_operational": 2, 00:35:10.886 "base_bdevs_list": [ 00:35:10.886 { 00:35:10.886 "name": "pt1", 00:35:10.886 "uuid": "586168aa-12b2-5e55-a230-74f4c4066dc5", 00:35:10.886 "is_configured": true, 00:35:10.886 "data_offset": 2048, 00:35:10.886 "data_size": 63488 00:35:10.886 }, 00:35:10.886 { 00:35:10.886 "name": null, 00:35:10.886 "uuid": 
"b830375c-8d4b-5b4c-982d-23355bb2f289", 00:35:10.886 "is_configured": false, 00:35:10.886 "data_offset": 2048, 00:35:10.886 "data_size": 63488 00:35:10.886 } 00:35:10.886 ] 00:35:10.886 }' 00:35:10.886 16:12:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:10.886 16:12:15 -- common/autotest_common.sh@10 -- # set +x 00:35:11.452 16:12:15 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:35:11.452 16:12:15 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:35:11.452 16:12:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:11.452 16:12:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:11.452 [2024-07-22 16:12:15.722678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:11.452 [2024-07-22 16:12:15.722809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:11.452 [2024-07-22 16:12:15.722868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:35:11.452 [2024-07-22 16:12:15.722884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:11.710 [2024-07-22 16:12:15.723554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:11.710 [2024-07-22 16:12:15.723598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:11.710 [2024-07-22 16:12:15.723710] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:11.710 [2024-07-22 16:12:15.723740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:11.710 [2024-07-22 16:12:15.723885] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:35:11.710 [2024-07-22 16:12:15.723900] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:35:11.710 [2024-07-22 16:12:15.724050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:35:11.710 [2024-07-22 16:12:15.724495] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:35:11.710 [2024-07-22 16:12:15.724533] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:35:11.710 [2024-07-22 16:12:15.724712] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:11.710 pt2 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:11.710 16:12:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:11.968 16:12:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:11.968 "name": "raid_bdev1", 00:35:11.968 "uuid": "5e8bb3ba-8b2e-4531-b28d-95cffdfe39ec", 00:35:11.968 "strip_size_kb": 64, 00:35:11.968 "state": "online", 00:35:11.968 "raid_level": "concat", 00:35:11.968 "superblock": true, 00:35:11.968 "num_base_bdevs": 2, 00:35:11.968 "num_base_bdevs_discovered": 2, 00:35:11.968 "num_base_bdevs_operational": 2, 00:35:11.968 "base_bdevs_list": [ 00:35:11.968 { 00:35:11.968 "name": "pt1", 00:35:11.968 "uuid": "586168aa-12b2-5e55-a230-74f4c4066dc5", 00:35:11.968 "is_configured": true, 00:35:11.968 "data_offset": 2048, 00:35:11.968 "data_size": 63488 00:35:11.968 }, 00:35:11.968 { 00:35:11.968 "name": "pt2", 00:35:11.968 "uuid": "b830375c-8d4b-5b4c-982d-23355bb2f289", 00:35:11.968 "is_configured": true, 00:35:11.968 "data_offset": 2048, 00:35:11.968 "data_size": 63488 00:35:11.968 } 00:35:11.968 ] 00:35:11.968 }' 00:35:11.968 16:12:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:11.968 16:12:16 -- common/autotest_common.sh@10 -- # set +x 00:35:12.226 16:12:16 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:12.226 16:12:16 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:35:12.484 [2024-07-22 16:12:16.599507] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:12.484 16:12:16 -- bdev/bdev_raid.sh@430 -- # '[' 5e8bb3ba-8b2e-4531-b28d-95cffdfe39ec '!=' 5e8bb3ba-8b2e-4531-b28d-95cffdfe39ec ']' 00:35:12.484 16:12:16 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:35:12.484 16:12:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:12.484 16:12:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:35:12.484 16:12:16 -- bdev/bdev_raid.sh@511 -- # killprocess 71245 00:35:12.484 16:12:16 -- common/autotest_common.sh@926 -- # '[' -z 71245 ']' 00:35:12.484 16:12:16 -- common/autotest_common.sh@930 -- # kill -0 71245 00:35:12.484 16:12:16 -- common/autotest_common.sh@931 -- # uname 00:35:12.484 16:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:12.484 16:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71245 00:35:12.484 killing process with pid 71245 00:35:12.484 16:12:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:12.484 16:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:12.484 16:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71245' 00:35:12.484 16:12:16 -- common/autotest_common.sh@945 -- # kill 71245 00:35:12.484 [2024-07-22 16:12:16.654456] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:12.484 16:12:16 -- common/autotest_common.sh@950 -- # wait 71245 00:35:12.484 [2024-07-22 16:12:16.654567] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:12.484 [2024-07-22 16:12:16.654628] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:12.484 [2024-07-22 16:12:16.654651] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:35:12.741 [2024-07-22 16:12:16.850994] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@513 -- # return 0 00:35:14.117 00:35:14.117 real 0m8.605s 00:35:14.117 user 
0m13.488s 00:35:14.117 sys 0m1.444s 00:35:14.117 16:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:14.117 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:35:14.117 ************************************ 00:35:14.117 END TEST raid_superblock_test 00:35:14.117 ************************************ 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:35:14.117 16:12:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:35:14.117 16:12:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:14.117 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:35:14.117 ************************************ 00:35:14.117 START TEST raid_state_function_test 00:35:14.117 ************************************ 00:35:14.117 16:12:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:14.117 16:12:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:35:14.118 Process raid pid: 71474 00:35:14.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=71474 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71474' 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71474 /var/tmp/spdk-raid.sock 00:35:14.118 16:12:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:14.118 16:12:18 -- common/autotest_common.sh@819 -- # '[' -z 71474 ']' 00:35:14.118 16:12:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:14.118 16:12:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:14.118 16:12:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:14.118 16:12:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:14.118 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:35:14.118 [2024-07-22 16:12:18.348268] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:35:14.118 [2024-07-22 16:12:18.348753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.375 [2024-07-22 16:12:18.525157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.632 [2024-07-22 16:12:18.818010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:14.888 [2024-07-22 16:12:19.042531] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:15.145 16:12:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:15.145 16:12:19 -- common/autotest_common.sh@852 -- # return 0 00:35:15.145 16:12:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:15.403 [2024-07-22 16:12:19.504122] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:15.403 [2024-07-22 16:12:19.504206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:15.403 [2024-07-22 16:12:19.504225] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:15.403 [2024-07-22 16:12:19.504242] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.403 16:12:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:35:15.661 16:12:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:15.661 "name": "Existed_Raid", 00:35:15.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.661 "strip_size_kb": 0, 00:35:15.661 "state": "configuring", 00:35:15.661 "raid_level": "raid1", 00:35:15.661 "superblock": false, 00:35:15.661 "num_base_bdevs": 2, 00:35:15.661 "num_base_bdevs_discovered": 0, 00:35:15.661 "num_base_bdevs_operational": 2, 00:35:15.661 "base_bdevs_list": [ 00:35:15.661 { 00:35:15.661 "name": "BaseBdev1", 00:35:15.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.661 "is_configured": false, 00:35:15.661 "data_offset": 0, 00:35:15.661 "data_size": 0 00:35:15.661 }, 00:35:15.661 { 00:35:15.661 "name": "BaseBdev2", 00:35:15.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.661 "is_configured": false, 00:35:15.661 "data_offset": 0, 00:35:15.661 "data_size": 0 00:35:15.661 } 00:35:15.661 ] 00:35:15.661 }' 00:35:15.661 16:12:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:15.661 16:12:19 -- common/autotest_common.sh@10 -- # set +x 00:35:15.919 16:12:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:16.177 [2024-07-22 16:12:20.364439] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:16.177 [2024-07-22 16:12:20.364776] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:35:16.177 16:12:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:16.435 [2024-07-22 16:12:20.624556] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:16.435 [2024-07-22 16:12:20.624946] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:16.435 [2024-07-22 16:12:20.625115] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:16.435 [2024-07-22 16:12:20.625154] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:16.435 16:12:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:16.693 [2024-07-22 16:12:20.915577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:16.693 BaseBdev1 00:35:16.693 16:12:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:16.693 16:12:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:35:16.693 16:12:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:16.693 16:12:20 -- common/autotest_common.sh@889 -- # local i 00:35:16.693 16:12:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:16.693 16:12:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:16.693 16:12:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:16.950 16:12:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:17.208 [ 00:35:17.208 { 00:35:17.208 "name": "BaseBdev1", 00:35:17.208 "aliases": [ 00:35:17.208 "be7295a5-0574-4311-8937-4c5baa212dff" 00:35:17.208 ], 00:35:17.208 "product_name": "Malloc disk", 00:35:17.208 
"block_size": 512, 00:35:17.208 "num_blocks": 65536, 00:35:17.208 "uuid": "be7295a5-0574-4311-8937-4c5baa212dff", 00:35:17.208 "assigned_rate_limits": { 00:35:17.208 "rw_ios_per_sec": 0, 00:35:17.208 "rw_mbytes_per_sec": 0, 00:35:17.208 "r_mbytes_per_sec": 0, 00:35:17.208 "w_mbytes_per_sec": 0 00:35:17.208 }, 00:35:17.208 "claimed": true, 00:35:17.208 "claim_type": "exclusive_write", 00:35:17.208 "zoned": false, 00:35:17.208 "supported_io_types": { 00:35:17.208 "read": true, 00:35:17.208 "write": true, 00:35:17.208 "unmap": true, 00:35:17.208 "write_zeroes": true, 00:35:17.208 "flush": true, 00:35:17.208 "reset": true, 00:35:17.208 "compare": false, 00:35:17.208 "compare_and_write": false, 00:35:17.208 "abort": true, 00:35:17.208 "nvme_admin": false, 00:35:17.208 "nvme_io": false 00:35:17.208 }, 00:35:17.208 "memory_domains": [ 00:35:17.208 { 00:35:17.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:17.208 "dma_device_type": 2 00:35:17.208 } 00:35:17.208 ], 00:35:17.208 "driver_specific": {} 00:35:17.208 } 00:35:17.208 ] 00:35:17.465 16:12:21 -- common/autotest_common.sh@895 -- # return 0 00:35:17.465 16:12:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:17.465 16:12:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:17.465 16:12:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:17.465 16:12:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:17.465 16:12:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:17.466 16:12:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:17.466 16:12:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:17.466 16:12:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:17.466 16:12:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:17.466 16:12:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:17.466 16:12:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:17.466 16:12:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:17.733 16:12:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:17.733 "name": "Existed_Raid", 00:35:17.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.733 "strip_size_kb": 0, 00:35:17.733 "state": "configuring", 00:35:17.733 "raid_level": "raid1", 00:35:17.733 "superblock": false, 00:35:17.733 "num_base_bdevs": 2, 00:35:17.733 "num_base_bdevs_discovered": 1, 00:35:17.733 "num_base_bdevs_operational": 2, 00:35:17.733 "base_bdevs_list": [ 00:35:17.733 { 00:35:17.733 "name": "BaseBdev1", 00:35:17.733 "uuid": "be7295a5-0574-4311-8937-4c5baa212dff", 00:35:17.733 "is_configured": true, 00:35:17.733 "data_offset": 0, 00:35:17.733 "data_size": 65536 00:35:17.733 }, 00:35:17.733 { 00:35:17.733 "name": "BaseBdev2", 00:35:17.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.733 "is_configured": false, 00:35:17.733 "data_offset": 0, 00:35:17.733 "data_size": 0 00:35:17.733 } 00:35:17.733 ] 00:35:17.733 }' 00:35:17.733 16:12:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:17.733 16:12:21 -- common/autotest_common.sh@10 -- # set +x 00:35:18.010 16:12:22 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:18.269 [2024-07-22 16:12:22.412471] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:18.269 [2024-07-22 16:12:22.412586] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:35:18.269 16:12:22 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:35:18.269 16:12:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:18.526 [2024-07-22 16:12:22.628677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:18.526 [2024-07-22 16:12:22.631921] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:18.526 [2024-07-22 16:12:22.632042] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:18.526 16:12:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:18.794 16:12:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:18.794 "name": "Existed_Raid", 00:35:18.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:18.794 "strip_size_kb": 0, 00:35:18.794 "state": "configuring", 00:35:18.794 "raid_level": "raid1", 00:35:18.794 "superblock": false, 00:35:18.794 "num_base_bdevs": 2, 00:35:18.794 "num_base_bdevs_discovered": 1, 00:35:18.794 "num_base_bdevs_operational": 2, 00:35:18.794 "base_bdevs_list": [ 00:35:18.794 { 00:35:18.794 "name": "BaseBdev1", 00:35:18.794 "uuid": "be7295a5-0574-4311-8937-4c5baa212dff", 00:35:18.794 "is_configured": true, 00:35:18.794 "data_offset": 0, 00:35:18.794 "data_size": 65536 00:35:18.794 }, 00:35:18.794 { 00:35:18.794 "name": "BaseBdev2", 00:35:18.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:18.794 "is_configured": false, 00:35:18.794 "data_offset": 0, 00:35:18.794 "data_size": 0 00:35:18.794 } 00:35:18.794 ] 00:35:18.794 }' 00:35:18.794 16:12:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:18.794 16:12:22 -- common/autotest_common.sh@10 -- # set +x 00:35:19.052 16:12:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:19.310 [2024-07-22 16:12:23.487126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:19.310 [2024-07-22 16:12:23.487389] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:35:19.310 [2024-07-22 16:12:23.487447] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 
65536, blocklen 512 00:35:19.310 [2024-07-22 16:12:23.487685] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:35:19.310 [2024-07-22 16:12:23.488264] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:35:19.310 [2024-07-22 16:12:23.488436] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:35:19.310 [2024-07-22 16:12:23.488921] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:19.310 BaseBdev2 00:35:19.310 16:12:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:19.310 16:12:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:35:19.310 16:12:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:19.310 16:12:23 -- common/autotest_common.sh@889 -- # local i 00:35:19.310 16:12:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:19.310 16:12:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:19.310 16:12:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:19.569 16:12:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:19.828 [ 00:35:19.828 { 00:35:19.828 "name": "BaseBdev2", 00:35:19.828 "aliases": [ 00:35:19.828 "dff5c20c-0099-4c5a-8a60-d4bb300f581f" 00:35:19.828 ], 00:35:19.828 "product_name": "Malloc disk", 00:35:19.828 "block_size": 512, 00:35:19.828 "num_blocks": 65536, 00:35:19.828 "uuid": "dff5c20c-0099-4c5a-8a60-d4bb300f581f", 00:35:19.828 "assigned_rate_limits": { 00:35:19.828 "rw_ios_per_sec": 0, 00:35:19.828 "rw_mbytes_per_sec": 0, 00:35:19.828 "r_mbytes_per_sec": 0, 00:35:19.828 "w_mbytes_per_sec": 0 00:35:19.828 }, 00:35:19.828 "claimed": true, 00:35:19.828 "claim_type": "exclusive_write", 00:35:19.828 "zoned": false, 00:35:19.828 "supported_io_types": { 00:35:19.828 "read": true, 00:35:19.828 "write": true, 00:35:19.828 "unmap": true, 00:35:19.828 "write_zeroes": true, 00:35:19.828 "flush": true, 00:35:19.828 "reset": true, 00:35:19.828 "compare": false, 00:35:19.828 "compare_and_write": false, 00:35:19.828 "abort": true, 00:35:19.828 "nvme_admin": false, 00:35:19.828 "nvme_io": false 00:35:19.828 }, 00:35:19.828 "memory_domains": [ 00:35:19.828 { 00:35:19.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:19.828 "dma_device_type": 2 00:35:19.828 } 00:35:19.828 ], 00:35:19.828 "driver_specific": {} 00:35:19.828 } 00:35:19.828 ] 00:35:19.828 16:12:24 -- common/autotest_common.sh@895 -- # return 0 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.828 16:12:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:20.086 16:12:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:20.086 "name": "Existed_Raid", 00:35:20.086 "uuid": "1fe9f7a2-c41d-494a-96d9-808c493dc3fe", 00:35:20.086 "strip_size_kb": 0, 00:35:20.086 "state": "online", 00:35:20.086 "raid_level": "raid1", 00:35:20.086 "superblock": false, 00:35:20.086 "num_base_bdevs": 2, 00:35:20.086 "num_base_bdevs_discovered": 2, 00:35:20.086 "num_base_bdevs_operational": 2, 00:35:20.086 "base_bdevs_list": [ 00:35:20.086 { 00:35:20.086 "name": "BaseBdev1", 00:35:20.086 "uuid": "be7295a5-0574-4311-8937-4c5baa212dff", 00:35:20.086 "is_configured": true, 00:35:20.086 "data_offset": 0, 00:35:20.086 "data_size": 65536 00:35:20.086 }, 00:35:20.086 { 00:35:20.086 "name": "BaseBdev2", 00:35:20.086 "uuid": "dff5c20c-0099-4c5a-8a60-d4bb300f581f", 00:35:20.086 "is_configured": true, 00:35:20.086 "data_offset": 0, 00:35:20.086 "data_size": 65536 00:35:20.086 } 00:35:20.086 ] 00:35:20.086 }' 00:35:20.086 16:12:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:20.086 16:12:24 -- common/autotest_common.sh@10 -- # set +x 00:35:20.654 16:12:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:20.654 [2024-07-22 16:12:24.843698] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.913 16:12:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:21.172 16:12:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:21.172 "name": "Existed_Raid", 00:35:21.172 "uuid": "1fe9f7a2-c41d-494a-96d9-808c493dc3fe", 00:35:21.172 "strip_size_kb": 0, 00:35:21.172 "state": "online", 00:35:21.172 "raid_level": "raid1", 00:35:21.172 "superblock": false, 00:35:21.172 "num_base_bdevs": 2, 00:35:21.172 "num_base_bdevs_discovered": 1, 00:35:21.172 "num_base_bdevs_operational": 1, 00:35:21.172 "base_bdevs_list": [ 00:35:21.172 { 00:35:21.172 "name": null, 00:35:21.172 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:21.172 "is_configured": false, 00:35:21.172 "data_offset": 0, 00:35:21.172 "data_size": 65536 00:35:21.172 }, 00:35:21.172 { 00:35:21.172 "name": "BaseBdev2", 00:35:21.172 "uuid": "dff5c20c-0099-4c5a-8a60-d4bb300f581f", 00:35:21.172 "is_configured": true, 00:35:21.172 "data_offset": 0, 00:35:21.172 "data_size": 65536 00:35:21.172 } 00:35:21.172 ] 00:35:21.172 }' 00:35:21.172 16:12:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:21.172 16:12:25 -- common/autotest_common.sh@10 -- # set +x 00:35:21.431 16:12:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:21.431 16:12:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:21.431 16:12:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.431 16:12:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:21.689 16:12:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:21.689 16:12:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:21.689 16:12:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:21.948 [2024-07-22 16:12:25.975461] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:21.948 [2024-07-22 16:12:25.975521] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:21.948 [2024-07-22 16:12:25.975650] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:21.948 [2024-07-22 16:12:26.074095] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:21.948 [2024-07-22 16:12:26.074195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:35:21.948 16:12:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:21.948 16:12:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:21.948 16:12:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.948 16:12:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:22.206 16:12:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:35:22.206 16:12:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:22.206 16:12:26 -- bdev/bdev_raid.sh@287 -- # killprocess 71474 00:35:22.206 16:12:26 -- common/autotest_common.sh@926 -- # '[' -z 71474 ']' 00:35:22.206 16:12:26 -- common/autotest_common.sh@930 -- # kill -0 71474 00:35:22.206 16:12:26 -- common/autotest_common.sh@931 -- # uname 00:35:22.206 16:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:22.206 16:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71474 00:35:22.206 killing process with pid 71474 00:35:22.206 16:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:22.206 16:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:22.206 16:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71474' 00:35:22.206 16:12:26 -- common/autotest_common.sh@945 -- # kill 71474 00:35:22.206 16:12:26 -- common/autotest_common.sh@950 -- # wait 71474 00:35:22.206 [2024-07-22 16:12:26.362906] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:22.206 [2024-07-22 16:12:26.363096] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:23.583 
************************************ 00:35:23.583 END TEST raid_state_function_test 00:35:23.583 ************************************ 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:35:23.583 00:35:23.583 real 0m9.389s 00:35:23.583 user 0m14.943s 00:35:23.583 sys 0m1.649s 00:35:23.583 16:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:23.583 16:12:27 -- common/autotest_common.sh@10 -- # set +x 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:35:23.583 16:12:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:35:23.583 16:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:23.583 16:12:27 -- common/autotest_common.sh@10 -- # set +x 00:35:23.583 ************************************ 00:35:23.583 START TEST raid_state_function_test_sb 00:35:23.583 ************************************ 00:35:23.583 16:12:27 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:35:23.583 Process raid pid: 71766 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=71766 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71766' 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:23.583 16:12:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71766 /var/tmp/spdk-raid.sock 00:35:23.583 16:12:27 -- common/autotest_common.sh@819 -- # '[' -z 71766 ']' 00:35:23.583 16:12:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:23.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
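The raid_state_function_test_sb variant that starts here drives the same state machine, but the bdev_raid_create calls carry the -s flag, so a raid superblock is written to the base bdevs; as the JSON dumps further down in this log show, that shifts data_offset from 0 to 2048 blocks and shrinks data_size from 65536 to 63488 blocks per member. For illustration, the superblock-enabled create call as it appears in the xtrace output below:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid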
00:35:23.583 16:12:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:23.583 16:12:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:23.583 16:12:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:23.583 16:12:27 -- common/autotest_common.sh@10 -- # set +x 00:35:23.583 [2024-07-22 16:12:27.797935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:35:23.583 [2024-07-22 16:12:27.798125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:23.842 [2024-07-22 16:12:27.972111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.101 [2024-07-22 16:12:28.271375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.360 [2024-07-22 16:12:28.489499] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:24.618 16:12:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:24.618 16:12:28 -- common/autotest_common.sh@852 -- # return 0 00:35:24.618 16:12:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:24.877 [2024-07-22 16:12:28.981495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:24.877 [2024-07-22 16:12:28.981565] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:24.877 [2024-07-22 16:12:28.981581] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:24.877 [2024-07-22 16:12:28.981598] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.877 16:12:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:25.136 16:12:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:25.136 "name": "Existed_Raid", 00:35:25.136 "uuid": "b6be458a-c67e-4c58-bc84-2184aa526962", 00:35:25.136 "strip_size_kb": 0, 00:35:25.136 "state": "configuring", 00:35:25.136 "raid_level": "raid1", 00:35:25.136 "superblock": true, 00:35:25.136 "num_base_bdevs": 2, 00:35:25.136 "num_base_bdevs_discovered": 0, 00:35:25.136 "num_base_bdevs_operational": 2, 00:35:25.136 "base_bdevs_list": [ 00:35:25.136 { 00:35:25.136 "name": "BaseBdev1", 00:35:25.136 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:25.136 "is_configured": false, 00:35:25.136 "data_offset": 0, 00:35:25.136 "data_size": 0 00:35:25.136 }, 00:35:25.136 { 00:35:25.136 "name": "BaseBdev2", 00:35:25.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:25.136 "is_configured": false, 00:35:25.136 "data_offset": 0, 00:35:25.136 "data_size": 0 00:35:25.136 } 00:35:25.136 ] 00:35:25.136 }' 00:35:25.136 16:12:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:25.136 16:12:29 -- common/autotest_common.sh@10 -- # set +x 00:35:25.394 16:12:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:25.677 [2024-07-22 16:12:29.821717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:25.677 [2024-07-22 16:12:29.821794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:35:25.677 16:12:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:25.935 [2024-07-22 16:12:30.077835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:25.935 [2024-07-22 16:12:30.077911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:25.935 [2024-07-22 16:12:30.077936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:25.935 [2024-07-22 16:12:30.077953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:25.935 16:12:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:26.193 [2024-07-22 16:12:30.361841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:26.193 BaseBdev1 00:35:26.193 16:12:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:26.193 16:12:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:35:26.193 16:12:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:26.193 16:12:30 -- common/autotest_common.sh@889 -- # local i 00:35:26.193 16:12:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:26.193 16:12:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:26.193 16:12:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:26.451 16:12:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:26.709 [ 00:35:26.709 { 00:35:26.709 "name": "BaseBdev1", 00:35:26.709 "aliases": [ 00:35:26.709 "f1fcf7b6-b966-4f23-95a9-b249b8ab3935" 00:35:26.709 ], 00:35:26.709 "product_name": "Malloc disk", 00:35:26.709 "block_size": 512, 00:35:26.709 "num_blocks": 65536, 00:35:26.709 "uuid": "f1fcf7b6-b966-4f23-95a9-b249b8ab3935", 00:35:26.709 "assigned_rate_limits": { 00:35:26.709 "rw_ios_per_sec": 0, 00:35:26.709 "rw_mbytes_per_sec": 0, 00:35:26.709 "r_mbytes_per_sec": 0, 00:35:26.709 "w_mbytes_per_sec": 0 00:35:26.709 }, 00:35:26.709 "claimed": true, 00:35:26.709 "claim_type": "exclusive_write", 00:35:26.709 "zoned": false, 00:35:26.709 "supported_io_types": { 00:35:26.709 "read": true, 00:35:26.709 "write": true, 00:35:26.709 "unmap": true, 00:35:26.709 "write_zeroes": 
true, 00:35:26.709 "flush": true, 00:35:26.709 "reset": true, 00:35:26.709 "compare": false, 00:35:26.709 "compare_and_write": false, 00:35:26.709 "abort": true, 00:35:26.709 "nvme_admin": false, 00:35:26.709 "nvme_io": false 00:35:26.709 }, 00:35:26.709 "memory_domains": [ 00:35:26.709 { 00:35:26.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:26.709 "dma_device_type": 2 00:35:26.709 } 00:35:26.709 ], 00:35:26.709 "driver_specific": {} 00:35:26.709 } 00:35:26.709 ] 00:35:26.709 16:12:30 -- common/autotest_common.sh@895 -- # return 0 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.709 16:12:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:26.968 16:12:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:26.968 "name": "Existed_Raid", 00:35:26.968 "uuid": "449ce6b5-6c5b-432e-9927-34d2ee953025", 00:35:26.968 "strip_size_kb": 0, 00:35:26.968 "state": "configuring", 00:35:26.968 "raid_level": "raid1", 00:35:26.968 "superblock": true, 00:35:26.968 "num_base_bdevs": 2, 00:35:26.968 "num_base_bdevs_discovered": 1, 00:35:26.968 "num_base_bdevs_operational": 2, 00:35:26.968 "base_bdevs_list": [ 00:35:26.968 { 00:35:26.968 "name": "BaseBdev1", 00:35:26.968 "uuid": "f1fcf7b6-b966-4f23-95a9-b249b8ab3935", 00:35:26.968 "is_configured": true, 00:35:26.968 "data_offset": 2048, 00:35:26.968 "data_size": 63488 00:35:26.968 }, 00:35:26.968 { 00:35:26.968 "name": "BaseBdev2", 00:35:26.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:26.968 "is_configured": false, 00:35:26.968 "data_offset": 0, 00:35:26.968 "data_size": 0 00:35:26.968 } 00:35:26.968 ] 00:35:26.968 }' 00:35:26.968 16:12:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:26.968 16:12:31 -- common/autotest_common.sh@10 -- # set +x 00:35:27.226 16:12:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:27.484 [2024-07-22 16:12:31.574646] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:27.484 [2024-07-22 16:12:31.576625] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:35:27.484 16:12:31 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:35:27.484 16:12:31 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:27.743 16:12:31 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:28.001 BaseBdev1 00:35:28.001 16:12:32 -- 
bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:35:28.001 16:12:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:35:28.001 16:12:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:28.001 16:12:32 -- common/autotest_common.sh@889 -- # local i 00:35:28.001 16:12:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:28.001 16:12:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:28.001 16:12:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:28.259 16:12:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:28.517 [ 00:35:28.517 { 00:35:28.517 "name": "BaseBdev1", 00:35:28.517 "aliases": [ 00:35:28.517 "1d984854-ca84-40c5-9f3f-554ba6c20d8f" 00:35:28.517 ], 00:35:28.517 "product_name": "Malloc disk", 00:35:28.517 "block_size": 512, 00:35:28.517 "num_blocks": 65536, 00:35:28.517 "uuid": "1d984854-ca84-40c5-9f3f-554ba6c20d8f", 00:35:28.517 "assigned_rate_limits": { 00:35:28.517 "rw_ios_per_sec": 0, 00:35:28.517 "rw_mbytes_per_sec": 0, 00:35:28.517 "r_mbytes_per_sec": 0, 00:35:28.517 "w_mbytes_per_sec": 0 00:35:28.517 }, 00:35:28.517 "claimed": false, 00:35:28.517 "zoned": false, 00:35:28.517 "supported_io_types": { 00:35:28.517 "read": true, 00:35:28.517 "write": true, 00:35:28.517 "unmap": true, 00:35:28.517 "write_zeroes": true, 00:35:28.517 "flush": true, 00:35:28.517 "reset": true, 00:35:28.517 "compare": false, 00:35:28.517 "compare_and_write": false, 00:35:28.517 "abort": true, 00:35:28.517 "nvme_admin": false, 00:35:28.517 "nvme_io": false 00:35:28.517 }, 00:35:28.517 "memory_domains": [ 00:35:28.517 { 00:35:28.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:28.517 "dma_device_type": 2 00:35:28.517 } 00:35:28.517 ], 00:35:28.517 "driver_specific": {} 00:35:28.517 } 00:35:28.517 ] 00:35:28.517 16:12:32 -- common/autotest_common.sh@895 -- # return 0 00:35:28.517 16:12:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:28.775 [2024-07-22 16:12:32.956549] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:28.776 [2024-07-22 16:12:32.959151] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:28.776 [2024-07-22 16:12:32.959222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.776 16:12:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:29.033 16:12:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:29.033 "name": "Existed_Raid", 00:35:29.033 "uuid": "fe5c5125-8661-49f8-aae6-f0c31106f267", 00:35:29.033 "strip_size_kb": 0, 00:35:29.033 "state": "configuring", 00:35:29.033 "raid_level": "raid1", 00:35:29.033 "superblock": true, 00:35:29.033 "num_base_bdevs": 2, 00:35:29.033 "num_base_bdevs_discovered": 1, 00:35:29.033 "num_base_bdevs_operational": 2, 00:35:29.033 "base_bdevs_list": [ 00:35:29.033 { 00:35:29.033 "name": "BaseBdev1", 00:35:29.033 "uuid": "1d984854-ca84-40c5-9f3f-554ba6c20d8f", 00:35:29.033 "is_configured": true, 00:35:29.033 "data_offset": 2048, 00:35:29.033 "data_size": 63488 00:35:29.033 }, 00:35:29.033 { 00:35:29.033 "name": "BaseBdev2", 00:35:29.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:29.033 "is_configured": false, 00:35:29.033 "data_offset": 0, 00:35:29.033 "data_size": 0 00:35:29.033 } 00:35:29.033 ] 00:35:29.033 }' 00:35:29.033 16:12:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:29.033 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:35:29.612 16:12:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:29.612 [2024-07-22 16:12:33.872875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:29.612 [2024-07-22 16:12:33.873428] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:35:29.612 [2024-07-22 16:12:33.873454] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:29.612 [2024-07-22 16:12:33.873589] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:35:29.612 [2024-07-22 16:12:33.874010] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:35:29.612 [2024-07-22 16:12:33.874035] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:35:29.612 BaseBdev2 00:35:29.612 [2024-07-22 16:12:33.874202] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:29.882 16:12:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:29.882 16:12:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:35:29.882 16:12:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:29.882 16:12:33 -- common/autotest_common.sh@889 -- # local i 00:35:29.882 16:12:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:29.882 16:12:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:29.882 16:12:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:30.140 16:12:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:30.140 [ 00:35:30.140 { 00:35:30.140 "name": "BaseBdev2", 00:35:30.140 "aliases": [ 00:35:30.140 "17078986-3008-4a49-ad4a-7f925cc2b3e4" 00:35:30.140 ], 00:35:30.140 "product_name": "Malloc disk", 00:35:30.140 "block_size": 512, 00:35:30.140 "num_blocks": 65536, 00:35:30.140 "uuid": "17078986-3008-4a49-ad4a-7f925cc2b3e4", 00:35:30.140 
"assigned_rate_limits": { 00:35:30.140 "rw_ios_per_sec": 0, 00:35:30.140 "rw_mbytes_per_sec": 0, 00:35:30.140 "r_mbytes_per_sec": 0, 00:35:30.140 "w_mbytes_per_sec": 0 00:35:30.140 }, 00:35:30.140 "claimed": true, 00:35:30.140 "claim_type": "exclusive_write", 00:35:30.140 "zoned": false, 00:35:30.140 "supported_io_types": { 00:35:30.140 "read": true, 00:35:30.140 "write": true, 00:35:30.140 "unmap": true, 00:35:30.140 "write_zeroes": true, 00:35:30.140 "flush": true, 00:35:30.140 "reset": true, 00:35:30.140 "compare": false, 00:35:30.140 "compare_and_write": false, 00:35:30.140 "abort": true, 00:35:30.140 "nvme_admin": false, 00:35:30.140 "nvme_io": false 00:35:30.140 }, 00:35:30.140 "memory_domains": [ 00:35:30.140 { 00:35:30.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:30.140 "dma_device_type": 2 00:35:30.140 } 00:35:30.140 ], 00:35:30.140 "driver_specific": {} 00:35:30.140 } 00:35:30.140 ] 00:35:30.140 16:12:34 -- common/autotest_common.sh@895 -- # return 0 00:35:30.140 16:12:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:30.140 16:12:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.141 16:12:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:30.399 16:12:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:30.399 "name": "Existed_Raid", 00:35:30.399 "uuid": "fe5c5125-8661-49f8-aae6-f0c31106f267", 00:35:30.399 "strip_size_kb": 0, 00:35:30.399 "state": "online", 00:35:30.399 "raid_level": "raid1", 00:35:30.399 "superblock": true, 00:35:30.399 "num_base_bdevs": 2, 00:35:30.399 "num_base_bdevs_discovered": 2, 00:35:30.399 "num_base_bdevs_operational": 2, 00:35:30.399 "base_bdevs_list": [ 00:35:30.399 { 00:35:30.399 "name": "BaseBdev1", 00:35:30.399 "uuid": "1d984854-ca84-40c5-9f3f-554ba6c20d8f", 00:35:30.399 "is_configured": true, 00:35:30.399 "data_offset": 2048, 00:35:30.399 "data_size": 63488 00:35:30.399 }, 00:35:30.399 { 00:35:30.399 "name": "BaseBdev2", 00:35:30.399 "uuid": "17078986-3008-4a49-ad4a-7f925cc2b3e4", 00:35:30.399 "is_configured": true, 00:35:30.399 "data_offset": 2048, 00:35:30.399 "data_size": 63488 00:35:30.399 } 00:35:30.399 ] 00:35:30.399 }' 00:35:30.399 16:12:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:30.399 16:12:34 -- common/autotest_common.sh@10 -- # set +x 00:35:30.966 16:12:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:30.966 [2024-07-22 16:12:35.189381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:31.224 16:12:35 -- bdev/bdev_raid.sh@263 -- # 
local expected_state 00:35:31.224 16:12:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:35:31.224 16:12:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:31.224 16:12:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:35:31.224 16:12:35 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:35:31.224 16:12:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:31.224 16:12:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.225 16:12:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:31.483 16:12:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:31.483 "name": "Existed_Raid", 00:35:31.483 "uuid": "fe5c5125-8661-49f8-aae6-f0c31106f267", 00:35:31.483 "strip_size_kb": 0, 00:35:31.483 "state": "online", 00:35:31.483 "raid_level": "raid1", 00:35:31.483 "superblock": true, 00:35:31.483 "num_base_bdevs": 2, 00:35:31.483 "num_base_bdevs_discovered": 1, 00:35:31.483 "num_base_bdevs_operational": 1, 00:35:31.483 "base_bdevs_list": [ 00:35:31.483 { 00:35:31.483 "name": null, 00:35:31.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:31.483 "is_configured": false, 00:35:31.483 "data_offset": 2048, 00:35:31.483 "data_size": 63488 00:35:31.483 }, 00:35:31.483 { 00:35:31.483 "name": "BaseBdev2", 00:35:31.483 "uuid": "17078986-3008-4a49-ad4a-7f925cc2b3e4", 00:35:31.483 "is_configured": true, 00:35:31.483 "data_offset": 2048, 00:35:31.483 "data_size": 63488 00:35:31.483 } 00:35:31.483 ] 00:35:31.483 }' 00:35:31.483 16:12:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:31.483 16:12:35 -- common/autotest_common.sh@10 -- # set +x 00:35:31.742 16:12:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:31.742 16:12:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:31.742 16:12:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.742 16:12:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:32.000 16:12:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:32.000 16:12:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:32.000 16:12:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:32.000 [2024-07-22 16:12:36.268521] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:32.000 [2024-07-22 16:12:36.268573] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:32.000 [2024-07-22 16:12:36.268636] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:32.259 [2024-07-22 16:12:36.359903] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:35:32.259 [2024-07-22 16:12:36.360120] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:35:32.259 16:12:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:32.259 16:12:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:32.259 16:12:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.259 16:12:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:32.518 16:12:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:35:32.518 16:12:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:32.518 16:12:36 -- bdev/bdev_raid.sh@287 -- # killprocess 71766 00:35:32.518 16:12:36 -- common/autotest_common.sh@926 -- # '[' -z 71766 ']' 00:35:32.518 16:12:36 -- common/autotest_common.sh@930 -- # kill -0 71766 00:35:32.518 16:12:36 -- common/autotest_common.sh@931 -- # uname 00:35:32.518 16:12:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:32.518 16:12:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71766 00:35:32.518 killing process with pid 71766 00:35:32.518 16:12:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:32.518 16:12:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:32.518 16:12:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71766' 00:35:32.518 16:12:36 -- common/autotest_common.sh@945 -- # kill 71766 00:35:32.518 [2024-07-22 16:12:36.676966] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:32.518 16:12:36 -- common/autotest_common.sh@950 -- # wait 71766 00:35:32.518 [2024-07-22 16:12:36.677113] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:33.893 16:12:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:35:33.893 00:35:33.893 real 0m10.241s 00:35:33.893 user 0m16.479s 00:35:33.893 sys 0m1.675s 00:35:33.893 16:12:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:33.893 ************************************ 00:35:33.893 END TEST raid_state_function_test_sb 00:35:33.893 ************************************ 00:35:33.893 16:12:37 -- common/autotest_common.sh@10 -- # set +x 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:35:33.893 16:12:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:35:33.893 16:12:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:33.893 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:35:33.893 ************************************ 00:35:33.893 START TEST raid_superblock_test 00:35:33.893 ************************************ 00:35:33.893 16:12:38 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:35:33.893 16:12:38 -- 
bdev/bdev_raid.sh@344 -- # local strip_size 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=72072 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:33.893 16:12:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 72072 /var/tmp/spdk-raid.sock 00:35:33.893 16:12:38 -- common/autotest_common.sh@819 -- # '[' -z 72072 ']' 00:35:33.893 16:12:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:33.893 16:12:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:33.893 16:12:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:33.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:33.893 16:12:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:33.893 16:12:38 -- common/autotest_common.sh@10 -- # set +x 00:35:33.893 [2024-07-22 16:12:38.097438] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:35:33.893 [2024-07-22 16:12:38.097950] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72072 ] 00:35:34.152 [2024-07-22 16:12:38.278467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.410 [2024-07-22 16:12:38.578020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.670 [2024-07-22 16:12:38.832557] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:34.928 16:12:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:34.928 16:12:39 -- common/autotest_common.sh@852 -- # return 0 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:34.928 16:12:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:35:35.186 malloc1 00:35:35.186 16:12:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:35.444 [2024-07-22 16:12:39.475524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:35.444 [2024-07-22 16:12:39.475684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.444 [2024-07-22 
16:12:39.475736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:35:35.444 [2024-07-22 16:12:39.475755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.444 [2024-07-22 16:12:39.479717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.444 [2024-07-22 16:12:39.479770] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:35.444 pt1 00:35:35.444 16:12:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:35:35.444 16:12:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:35.444 16:12:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:35:35.444 16:12:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:35:35.444 16:12:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:35.444 16:12:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:35.445 16:12:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:35:35.445 16:12:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:35.445 16:12:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:35:35.703 malloc2 00:35:35.703 16:12:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:35.703 [2024-07-22 16:12:39.975039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:35.703 [2024-07-22 16:12:39.975173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.703 [2024-07-22 16:12:39.975215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:35:35.703 [2024-07-22 16:12:39.975233] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.962 [2024-07-22 16:12:39.978409] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.962 [2024-07-22 16:12:39.978486] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:35.962 pt2 00:35:35.962 16:12:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:35:35.962 16:12:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:35:35.962 16:12:40 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:35:36.220 [2024-07-22 16:12:40.267404] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:36.221 [2024-07-22 16:12:40.270317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:36.221 [2024-07-22 16:12:40.270725] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:35:36.221 [2024-07-22 16:12:40.270875] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:36.221 [2024-07-22 16:12:40.271224] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:35:36.221 [2024-07-22 16:12:40.271873] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:35:36.221 [2024-07-22 16:12:40.272025] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:35:36.221 [2024-07-22 16:12:40.272424] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.221 16:12:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.480 16:12:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:36.480 "name": "raid_bdev1", 00:35:36.480 "uuid": "b2821c63-8bbe-4fe9-b742-61593e41a35a", 00:35:36.480 "strip_size_kb": 0, 00:35:36.480 "state": "online", 00:35:36.480 "raid_level": "raid1", 00:35:36.480 "superblock": true, 00:35:36.480 "num_base_bdevs": 2, 00:35:36.480 "num_base_bdevs_discovered": 2, 00:35:36.480 "num_base_bdevs_operational": 2, 00:35:36.480 "base_bdevs_list": [ 00:35:36.480 { 00:35:36.480 "name": "pt1", 00:35:36.480 "uuid": "abd9ad65-a26f-5694-9db9-afd7bc158e25", 00:35:36.480 "is_configured": true, 00:35:36.480 "data_offset": 2048, 00:35:36.480 "data_size": 63488 00:35:36.480 }, 00:35:36.480 { 00:35:36.480 "name": "pt2", 00:35:36.480 "uuid": "7ae2cd25-ce77-5d08-b625-04237a42feca", 00:35:36.480 "is_configured": true, 00:35:36.480 "data_offset": 2048, 00:35:36.480 "data_size": 63488 00:35:36.480 } 00:35:36.480 ] 00:35:36.480 }' 00:35:36.480 16:12:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:36.480 16:12:40 -- common/autotest_common.sh@10 -- # set +x 00:35:36.739 16:12:40 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:36.739 16:12:40 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:35:37.043 [2024-07-22 16:12:41.157231] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:37.043 16:12:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b2821c63-8bbe-4fe9-b742-61593e41a35a 00:35:37.043 16:12:41 -- bdev/bdev_raid.sh@380 -- # '[' -z b2821c63-8bbe-4fe9-b742-61593e41a35a ']' 00:35:37.043 16:12:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:37.301 [2024-07-22 16:12:41.392896] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:37.301 [2024-07-22 16:12:41.392991] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:37.301 [2024-07-22 16:12:41.393154] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:37.301 [2024-07-22 16:12:41.393243] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:37.301 [2024-07-22 16:12:41.393260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:35:37.301 16:12:41 
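The raid_superblock_test above stacks its array one layer higher than the previous tests: each malloc bdev gets a passthru bdev on top (pt1/pt2), and raid_bdev1 is created from the passthru bdevs with -s, so the raid superblock is written through them onto the underlying malloc bdevs. A condensed sketch of the stacking performed so far, reusing the RPC calls and UUIDs shown in this log (the RPC shorthand is local notation only):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s    # writes superblock; raid_bdev1 goes "online"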
-- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:37.301 16:12:41 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:35:37.559 16:12:41 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:35:37.560 16:12:41 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:35:37.560 16:12:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:37.560 16:12:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:37.818 16:12:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:35:37.818 16:12:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:38.076 16:12:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:38.076 16:12:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:38.335 16:12:42 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:35:38.335 16:12:42 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:38.335 16:12:42 -- common/autotest_common.sh@640 -- # local es=0 00:35:38.335 16:12:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:38.335 16:12:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:38.335 16:12:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:38.335 16:12:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:38.335 16:12:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:38.335 16:12:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:38.335 16:12:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:35:38.335 16:12:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:38.335 16:12:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:38.335 16:12:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:38.594 [2024-07-22 16:12:42.613334] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:38.594 [2024-07-22 16:12:42.617954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:38.594 [2024-07-22 16:12:42.618065] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:35:38.594 [2024-07-22 16:12:42.618204] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:35:38.594 [2024-07-22 16:12:42.618238] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:38.594 [2024-07-22 16:12:42.618252] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:35:38.594 request: 00:35:38.594 { 00:35:38.594 "name": "raid_bdev1", 00:35:38.594 "raid_level": "raid1", 00:35:38.594 "base_bdevs": [ 
00:35:38.594 "malloc1", 00:35:38.594 "malloc2" 00:35:38.594 ], 00:35:38.594 "superblock": false, 00:35:38.594 "method": "bdev_raid_create", 00:35:38.594 "req_id": 1 00:35:38.594 } 00:35:38.594 Got JSON-RPC error response 00:35:38.594 response: 00:35:38.594 { 00:35:38.594 "code": -17, 00:35:38.594 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:38.594 } 00:35:38.594 16:12:42 -- common/autotest_common.sh@643 -- # es=1 00:35:38.595 16:12:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:35:38.595 16:12:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:35:38.595 16:12:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:35:38.595 16:12:42 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.595 16:12:42 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:35:38.852 16:12:42 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:35:38.852 16:12:42 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:35:38.852 16:12:42 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:38.852 [2024-07-22 16:12:43.118513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:38.852 [2024-07-22 16:12:43.118614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:38.852 [2024-07-22 16:12:43.118669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:35:38.852 [2024-07-22 16:12:43.118684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:38.852 [2024-07-22 16:12:43.121937] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:38.852 pt1 00:35:38.852 [2024-07-22 16:12:43.122298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:38.852 [2024-07-22 16:12:43.122464] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:35:38.852 [2024-07-22 16:12:43.122531] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.110 16:12:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.369 16:12:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:39.369 "name": "raid_bdev1", 00:35:39.369 "uuid": "b2821c63-8bbe-4fe9-b742-61593e41a35a", 00:35:39.369 "strip_size_kb": 0, 00:35:39.369 "state": "configuring", 00:35:39.369 "raid_level": "raid1", 00:35:39.369 "superblock": true, 
00:35:39.369 "num_base_bdevs": 2, 00:35:39.369 "num_base_bdevs_discovered": 1, 00:35:39.369 "num_base_bdevs_operational": 2, 00:35:39.369 "base_bdevs_list": [ 00:35:39.369 { 00:35:39.369 "name": "pt1", 00:35:39.369 "uuid": "abd9ad65-a26f-5694-9db9-afd7bc158e25", 00:35:39.369 "is_configured": true, 00:35:39.369 "data_offset": 2048, 00:35:39.369 "data_size": 63488 00:35:39.369 }, 00:35:39.369 { 00:35:39.369 "name": null, 00:35:39.369 "uuid": "7ae2cd25-ce77-5d08-b625-04237a42feca", 00:35:39.369 "is_configured": false, 00:35:39.369 "data_offset": 2048, 00:35:39.369 "data_size": 63488 00:35:39.369 } 00:35:39.369 ] 00:35:39.369 }' 00:35:39.369 16:12:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:39.369 16:12:43 -- common/autotest_common.sh@10 -- # set +x 00:35:39.627 16:12:43 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:35:39.627 16:12:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:35:39.627 16:12:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:39.627 16:12:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:39.886 [2024-07-22 16:12:43.998943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:39.886 [2024-07-22 16:12:43.999145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:39.886 [2024-07-22 16:12:43.999194] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:35:39.886 [2024-07-22 16:12:43.999226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:39.886 [2024-07-22 16:12:44.000000] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:39.886 [2024-07-22 16:12:44.000077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:39.886 [2024-07-22 16:12:44.000398] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:39.886 [2024-07-22 16:12:44.000441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:39.886 [2024-07-22 16:12:44.000669] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:35:39.886 [2024-07-22 16:12:44.000689] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:39.886 [2024-07-22 16:12:44.000838] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:35:39.886 [2024-07-22 16:12:44.001286] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:35:39.886 [2024-07-22 16:12:44.001308] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:35:39.886 [2024-07-22 16:12:44.001458] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:39.886 pt2 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.886 16:12:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:40.145 16:12:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:40.145 "name": "raid_bdev1", 00:35:40.145 "uuid": "b2821c63-8bbe-4fe9-b742-61593e41a35a", 00:35:40.145 "strip_size_kb": 0, 00:35:40.145 "state": "online", 00:35:40.145 "raid_level": "raid1", 00:35:40.145 "superblock": true, 00:35:40.145 "num_base_bdevs": 2, 00:35:40.145 "num_base_bdevs_discovered": 2, 00:35:40.145 "num_base_bdevs_operational": 2, 00:35:40.145 "base_bdevs_list": [ 00:35:40.145 { 00:35:40.145 "name": "pt1", 00:35:40.145 "uuid": "abd9ad65-a26f-5694-9db9-afd7bc158e25", 00:35:40.145 "is_configured": true, 00:35:40.145 "data_offset": 2048, 00:35:40.145 "data_size": 63488 00:35:40.145 }, 00:35:40.145 { 00:35:40.145 "name": "pt2", 00:35:40.145 "uuid": "7ae2cd25-ce77-5d08-b625-04237a42feca", 00:35:40.145 "is_configured": true, 00:35:40.145 "data_offset": 2048, 00:35:40.145 "data_size": 63488 00:35:40.145 } 00:35:40.145 ] 00:35:40.145 }' 00:35:40.145 16:12:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:40.145 16:12:44 -- common/autotest_common.sh@10 -- # set +x 00:35:40.404 16:12:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:40.404 16:12:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:35:40.663 [2024-07-22 16:12:44.819612] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:40.663 16:12:44 -- bdev/bdev_raid.sh@430 -- # '[' b2821c63-8bbe-4fe9-b742-61593e41a35a '!=' b2821c63-8bbe-4fe9-b742-61593e41a35a ']' 00:35:40.663 16:12:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:35:40.663 16:12:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:40.663 16:12:44 -- bdev/bdev_raid.sh@196 -- # return 0 00:35:40.663 16:12:44 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:40.921 [2024-07-22 16:12:45.091517] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.921 16:12:45 -- bdev/bdev_raid.sh@127 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:35:41.180 16:12:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:41.180 "name": "raid_bdev1", 00:35:41.180 "uuid": "b2821c63-8bbe-4fe9-b742-61593e41a35a", 00:35:41.180 "strip_size_kb": 0, 00:35:41.180 "state": "online", 00:35:41.180 "raid_level": "raid1", 00:35:41.180 "superblock": true, 00:35:41.180 "num_base_bdevs": 2, 00:35:41.180 "num_base_bdevs_discovered": 1, 00:35:41.180 "num_base_bdevs_operational": 1, 00:35:41.180 "base_bdevs_list": [ 00:35:41.180 { 00:35:41.180 "name": null, 00:35:41.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:41.180 "is_configured": false, 00:35:41.180 "data_offset": 2048, 00:35:41.180 "data_size": 63488 00:35:41.180 }, 00:35:41.180 { 00:35:41.180 "name": "pt2", 00:35:41.180 "uuid": "7ae2cd25-ce77-5d08-b625-04237a42feca", 00:35:41.180 "is_configured": true, 00:35:41.180 "data_offset": 2048, 00:35:41.180 "data_size": 63488 00:35:41.180 } 00:35:41.180 ] 00:35:41.180 }' 00:35:41.180 16:12:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:41.180 16:12:45 -- common/autotest_common.sh@10 -- # set +x 00:35:41.746 16:12:45 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:41.746 [2024-07-22 16:12:45.987781] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:41.746 [2024-07-22 16:12:45.987860] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:41.746 [2024-07-22 16:12:45.987977] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:41.746 [2024-07-22 16:12:45.988342] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:41.746 [2024-07-22 16:12:45.988379] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:35:41.746 16:12:46 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:35:41.746 16:12:46 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@462 -- # i=1 00:35:42.313 16:12:46 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:42.572 [2024-07-22 16:12:46.728017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:42.572 [2024-07-22 16:12:46.728383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:42.572 [2024-07-22 16:12:46.728473] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:35:42.572 [2024-07-22 16:12:46.728802] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:42.572 [2024-07-22 16:12:46.732394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:42.572 [2024-07-22 16:12:46.732615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:42.572 [2024-07-22 16:12:46.732906] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:35:42.572 [2024-07-22 16:12:46.733001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:42.572 [2024-07-22 16:12:46.733268] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:35:42.572 [2024-07-22 16:12:46.733308] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:42.572 [2024-07-22 16:12:46.733444] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:35:42.572 [2024-07-22 16:12:46.733926] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:35:42.572 [2024-07-22 16:12:46.733942] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:35:42.572 pt2 00:35:42.572 [2024-07-22 16:12:46.734131] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.572 16:12:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.830 16:12:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:42.830 "name": "raid_bdev1", 00:35:42.830 "uuid": "b2821c63-8bbe-4fe9-b742-61593e41a35a", 00:35:42.830 "strip_size_kb": 0, 00:35:42.830 "state": "online", 00:35:42.830 "raid_level": "raid1", 00:35:42.830 "superblock": true, 00:35:42.830 "num_base_bdevs": 2, 00:35:42.830 "num_base_bdevs_discovered": 1, 00:35:42.830 "num_base_bdevs_operational": 1, 00:35:42.830 "base_bdevs_list": [ 00:35:42.830 { 00:35:42.830 "name": null, 00:35:42.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:42.830 "is_configured": false, 00:35:42.830 "data_offset": 2048, 00:35:42.830 "data_size": 63488 00:35:42.830 }, 00:35:42.830 { 00:35:42.830 "name": "pt2", 00:35:42.830 "uuid": "7ae2cd25-ce77-5d08-b625-04237a42feca", 00:35:42.830 "is_configured": true, 00:35:42.830 "data_offset": 2048, 00:35:42.830 "data_size": 63488 00:35:42.830 } 00:35:42.830 ] 00:35:42.830 }' 00:35:42.830 16:12:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:42.830 16:12:47 -- common/autotest_common.sh@10 -- # set +x 00:35:43.089 16:12:47 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:35:43.089 16:12:47 -- bdev/bdev_raid.sh@506 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:43.089 16:12:47 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:35:43.347 [2024-07-22 16:12:47.569823] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:43.347 16:12:47 -- bdev/bdev_raid.sh@506 -- # '[' b2821c63-8bbe-4fe9-b742-61593e41a35a '!=' b2821c63-8bbe-4fe9-b742-61593e41a35a ']' 00:35:43.347 16:12:47 -- bdev/bdev_raid.sh@511 -- # killprocess 72072 00:35:43.347 16:12:47 -- common/autotest_common.sh@926 -- # '[' -z 72072 ']' 00:35:43.347 16:12:47 -- common/autotest_common.sh@930 -- # kill -0 72072 00:35:43.347 16:12:47 -- common/autotest_common.sh@931 -- # uname 00:35:43.347 16:12:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:43.347 16:12:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72072 00:35:43.606 16:12:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:43.606 16:12:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:43.606 killing process with pid 72072 00:35:43.606 16:12:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72072' 00:35:43.606 16:12:47 -- common/autotest_common.sh@945 -- # kill 72072 00:35:43.606 [2024-07-22 16:12:47.625640] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:43.606 16:12:47 -- common/autotest_common.sh@950 -- # wait 72072 00:35:43.606 [2024-07-22 16:12:47.625745] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:43.606 [2024-07-22 16:12:47.625825] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:43.606 [2024-07-22 16:12:47.625842] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:35:43.606 [2024-07-22 16:12:47.808721] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:44.980 ************************************ 00:35:44.980 END TEST raid_superblock_test 00:35:44.980 ************************************ 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:35:44.980 00:35:44.980 real 0m11.123s 00:35:44.980 user 0m18.030s 00:35:44.980 sys 0m1.959s 00:35:44.980 16:12:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:44.980 16:12:49 -- common/autotest_common.sh@10 -- # set +x 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:35:44.980 16:12:49 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:35:44.980 16:12:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:44.980 16:12:49 -- common/autotest_common.sh@10 -- # set +x 00:35:44.980 ************************************ 00:35:44.980 START TEST raid_state_function_test 00:35:44.980 ************************************ 00:35:44.980 16:12:49 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:44.980 16:12:49 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:35:44.980 16:12:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:35:44.981 16:12:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:35:44.981 16:12:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:35:44.981 16:12:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:35:44.981 16:12:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=72399 00:35:44.981 Process raid pid: 72399 00:35:44.981 16:12:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72399' 00:35:44.981 16:12:49 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72399 /var/tmp/spdk-raid.sock 00:35:44.981 16:12:49 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:44.981 16:12:49 -- common/autotest_common.sh@819 -- # '[' -z 72399 ']' 00:35:44.981 16:12:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:44.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:44.981 16:12:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:44.981 16:12:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:44.981 16:12:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:44.981 16:12:49 -- common/autotest_common.sh@10 -- # set +x 00:35:45.240 [2024-07-22 16:12:49.280304] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:35:45.240 [2024-07-22 16:12:49.280511] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.240 [2024-07-22 16:12:49.459156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.499 [2024-07-22 16:12:49.741252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.758 [2024-07-22 16:12:49.973020] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:46.017 16:12:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:46.017 16:12:50 -- common/autotest_common.sh@852 -- # return 0 00:35:46.017 16:12:50 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:35:46.275 [2024-07-22 16:12:50.491578] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:46.275 [2024-07-22 16:12:50.491684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:46.275 [2024-07-22 16:12:50.491711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:46.275 [2024-07-22 16:12:50.491736] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:46.275 [2024-07-22 16:12:50.491769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:46.275 [2024-07-22 16:12:50.491799] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.275 16:12:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:46.534 16:12:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:46.534 "name": "Existed_Raid", 00:35:46.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.534 "strip_size_kb": 64, 00:35:46.534 "state": "configuring", 00:35:46.534 "raid_level": "raid0", 00:35:46.534 "superblock": false, 00:35:46.534 "num_base_bdevs": 3, 00:35:46.534 "num_base_bdevs_discovered": 0, 00:35:46.534 "num_base_bdevs_operational": 3, 00:35:46.534 "base_bdevs_list": [ 00:35:46.534 { 00:35:46.534 "name": "BaseBdev1", 00:35:46.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.534 "is_configured": false, 00:35:46.534 "data_offset": 0, 00:35:46.534 "data_size": 0 00:35:46.534 }, 00:35:46.534 { 00:35:46.534 "name": "BaseBdev2", 00:35:46.534 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:46.534 "is_configured": false, 00:35:46.534 "data_offset": 0, 00:35:46.534 "data_size": 0 00:35:46.534 }, 00:35:46.534 { 00:35:46.534 "name": "BaseBdev3", 00:35:46.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.534 "is_configured": false, 00:35:46.534 "data_offset": 0, 00:35:46.534 "data_size": 0 00:35:46.534 } 00:35:46.534 ] 00:35:46.534 }' 00:35:46.534 16:12:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:46.534 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:35:47.101 16:12:51 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:47.101 [2024-07-22 16:12:51.335698] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:47.101 [2024-07-22 16:12:51.335779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:35:47.101 16:12:51 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:35:47.667 [2024-07-22 16:12:51.647895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:47.667 [2024-07-22 16:12:51.647982] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:47.667 [2024-07-22 16:12:51.648043] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:47.667 [2024-07-22 16:12:51.648065] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:47.667 [2024-07-22 16:12:51.648075] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:47.667 [2024-07-22 16:12:51.648089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:47.667 16:12:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:47.927 [2024-07-22 16:12:51.947686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:47.927 BaseBdev1 00:35:47.927 16:12:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:47.927 16:12:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:35:47.927 16:12:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:47.927 16:12:51 -- common/autotest_common.sh@889 -- # local i 00:35:47.927 16:12:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:47.927 16:12:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:47.927 16:12:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:48.185 16:12:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:48.185 [ 00:35:48.185 { 00:35:48.185 "name": "BaseBdev1", 00:35:48.185 "aliases": [ 00:35:48.185 "8bf63afe-55a0-4282-ac84-03a6805d6c68" 00:35:48.185 ], 00:35:48.185 "product_name": "Malloc disk", 00:35:48.185 "block_size": 512, 00:35:48.185 "num_blocks": 65536, 00:35:48.185 "uuid": "8bf63afe-55a0-4282-ac84-03a6805d6c68", 00:35:48.185 "assigned_rate_limits": { 00:35:48.185 "rw_ios_per_sec": 0, 00:35:48.185 "rw_mbytes_per_sec": 0, 00:35:48.185 "r_mbytes_per_sec": 0, 00:35:48.185 "w_mbytes_per_sec": 0 
00:35:48.185 }, 00:35:48.185 "claimed": true, 00:35:48.185 "claim_type": "exclusive_write", 00:35:48.185 "zoned": false, 00:35:48.186 "supported_io_types": { 00:35:48.186 "read": true, 00:35:48.186 "write": true, 00:35:48.186 "unmap": true, 00:35:48.186 "write_zeroes": true, 00:35:48.186 "flush": true, 00:35:48.186 "reset": true, 00:35:48.186 "compare": false, 00:35:48.186 "compare_and_write": false, 00:35:48.186 "abort": true, 00:35:48.186 "nvme_admin": false, 00:35:48.186 "nvme_io": false 00:35:48.186 }, 00:35:48.186 "memory_domains": [ 00:35:48.186 { 00:35:48.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:48.186 "dma_device_type": 2 00:35:48.186 } 00:35:48.186 ], 00:35:48.186 "driver_specific": {} 00:35:48.186 } 00:35:48.186 ] 00:35:48.186 16:12:52 -- common/autotest_common.sh@895 -- # return 0 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.186 16:12:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:48.443 16:12:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:48.443 "name": "Existed_Raid", 00:35:48.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.443 "strip_size_kb": 64, 00:35:48.443 "state": "configuring", 00:35:48.443 "raid_level": "raid0", 00:35:48.443 "superblock": false, 00:35:48.443 "num_base_bdevs": 3, 00:35:48.443 "num_base_bdevs_discovered": 1, 00:35:48.443 "num_base_bdevs_operational": 3, 00:35:48.443 "base_bdevs_list": [ 00:35:48.443 { 00:35:48.443 "name": "BaseBdev1", 00:35:48.443 "uuid": "8bf63afe-55a0-4282-ac84-03a6805d6c68", 00:35:48.443 "is_configured": true, 00:35:48.443 "data_offset": 0, 00:35:48.443 "data_size": 65536 00:35:48.443 }, 00:35:48.443 { 00:35:48.443 "name": "BaseBdev2", 00:35:48.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.443 "is_configured": false, 00:35:48.443 "data_offset": 0, 00:35:48.443 "data_size": 0 00:35:48.443 }, 00:35:48.443 { 00:35:48.443 "name": "BaseBdev3", 00:35:48.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.443 "is_configured": false, 00:35:48.443 "data_offset": 0, 00:35:48.443 "data_size": 0 00:35:48.443 } 00:35:48.443 ] 00:35:48.443 }' 00:35:48.443 16:12:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:48.443 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:35:49.007 16:12:53 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:49.264 [2024-07-22 16:12:53.320395] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:49.264 [2024-07-22 16:12:53.320520] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:35:49.264 16:12:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:35:49.264 16:12:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:35:49.522 [2024-07-22 16:12:53.592534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:49.522 [2024-07-22 16:12:53.595182] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:49.522 [2024-07-22 16:12:53.595241] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:49.522 [2024-07-22 16:12:53.595257] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:49.522 [2024-07-22 16:12:53.595302] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.522 16:12:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:49.779 16:12:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:49.779 "name": "Existed_Raid", 00:35:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.779 "strip_size_kb": 64, 00:35:49.779 "state": "configuring", 00:35:49.779 "raid_level": "raid0", 00:35:49.779 "superblock": false, 00:35:49.779 "num_base_bdevs": 3, 00:35:49.779 "num_base_bdevs_discovered": 1, 00:35:49.779 "num_base_bdevs_operational": 3, 00:35:49.779 "base_bdevs_list": [ 00:35:49.779 { 00:35:49.779 "name": "BaseBdev1", 00:35:49.779 "uuid": "8bf63afe-55a0-4282-ac84-03a6805d6c68", 00:35:49.779 "is_configured": true, 00:35:49.779 "data_offset": 0, 00:35:49.779 "data_size": 65536 00:35:49.779 }, 00:35:49.779 { 00:35:49.779 "name": "BaseBdev2", 00:35:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.779 "is_configured": false, 00:35:49.779 "data_offset": 0, 00:35:49.779 "data_size": 0 00:35:49.779 }, 00:35:49.779 { 00:35:49.779 "name": "BaseBdev3", 00:35:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.779 "is_configured": false, 00:35:49.779 "data_offset": 0, 00:35:49.779 "data_size": 0 00:35:49.779 } 00:35:49.779 ] 00:35:49.779 }' 00:35:49.779 16:12:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:49.779 16:12:53 -- common/autotest_common.sh@10 -- # set +x 00:35:50.035 16:12:54 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:35:50.292 [2024-07-22 16:12:54.499571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:50.292 BaseBdev2 00:35:50.292 16:12:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:35:50.292 16:12:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:35:50.292 16:12:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:50.293 16:12:54 -- common/autotest_common.sh@889 -- # local i 00:35:50.293 16:12:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:50.293 16:12:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:50.293 16:12:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:50.550 16:12:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:50.807 [ 00:35:50.807 { 00:35:50.807 "name": "BaseBdev2", 00:35:50.807 "aliases": [ 00:35:50.807 "363d0107-923d-4f59-8423-fa3e35fc77e2" 00:35:50.807 ], 00:35:50.807 "product_name": "Malloc disk", 00:35:50.807 "block_size": 512, 00:35:50.807 "num_blocks": 65536, 00:35:50.807 "uuid": "363d0107-923d-4f59-8423-fa3e35fc77e2", 00:35:50.807 "assigned_rate_limits": { 00:35:50.807 "rw_ios_per_sec": 0, 00:35:50.807 "rw_mbytes_per_sec": 0, 00:35:50.807 "r_mbytes_per_sec": 0, 00:35:50.807 "w_mbytes_per_sec": 0 00:35:50.807 }, 00:35:50.807 "claimed": true, 00:35:50.807 "claim_type": "exclusive_write", 00:35:50.807 "zoned": false, 00:35:50.807 "supported_io_types": { 00:35:50.807 "read": true, 00:35:50.807 "write": true, 00:35:50.807 "unmap": true, 00:35:50.807 "write_zeroes": true, 00:35:50.807 "flush": true, 00:35:50.807 "reset": true, 00:35:50.807 "compare": false, 00:35:50.807 "compare_and_write": false, 00:35:50.807 "abort": true, 00:35:50.807 "nvme_admin": false, 00:35:50.807 "nvme_io": false 00:35:50.807 }, 00:35:50.807 "memory_domains": [ 00:35:50.807 { 00:35:50.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:50.807 "dma_device_type": 2 00:35:50.807 } 00:35:50.807 ], 00:35:50.807 "driver_specific": {} 00:35:50.807 } 00:35:50.807 ] 00:35:50.807 16:12:55 -- common/autotest_common.sh@895 -- # return 0 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.807 16:12:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:35:51.073 16:12:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:51.073 "name": "Existed_Raid", 00:35:51.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.073 "strip_size_kb": 64, 00:35:51.073 "state": "configuring", 00:35:51.073 "raid_level": "raid0", 00:35:51.073 "superblock": false, 00:35:51.073 "num_base_bdevs": 3, 00:35:51.073 "num_base_bdevs_discovered": 2, 00:35:51.073 "num_base_bdevs_operational": 3, 00:35:51.073 "base_bdevs_list": [ 00:35:51.073 { 00:35:51.073 "name": "BaseBdev1", 00:35:51.073 "uuid": "8bf63afe-55a0-4282-ac84-03a6805d6c68", 00:35:51.073 "is_configured": true, 00:35:51.073 "data_offset": 0, 00:35:51.073 "data_size": 65536 00:35:51.073 }, 00:35:51.073 { 00:35:51.073 "name": "BaseBdev2", 00:35:51.073 "uuid": "363d0107-923d-4f59-8423-fa3e35fc77e2", 00:35:51.073 "is_configured": true, 00:35:51.073 "data_offset": 0, 00:35:51.073 "data_size": 65536 00:35:51.073 }, 00:35:51.073 { 00:35:51.073 "name": "BaseBdev3", 00:35:51.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.073 "is_configured": false, 00:35:51.073 "data_offset": 0, 00:35:51.073 "data_size": 0 00:35:51.073 } 00:35:51.073 ] 00:35:51.073 }' 00:35:51.073 16:12:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:51.073 16:12:55 -- common/autotest_common.sh@10 -- # set +x 00:35:51.638 16:12:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:35:51.896 [2024-07-22 16:12:55.967530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:51.896 [2024-07-22 16:12:55.967595] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:35:51.896 [2024-07-22 16:12:55.967615] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:35:51.896 [2024-07-22 16:12:55.967790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:35:51.896 [2024-07-22 16:12:55.968309] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:35:51.896 [2024-07-22 16:12:55.968342] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:35:51.896 [2024-07-22 16:12:55.968684] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:51.896 BaseBdev3 00:35:51.896 16:12:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:35:51.896 16:12:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:35:51.896 16:12:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:51.896 16:12:55 -- common/autotest_common.sh@889 -- # local i 00:35:51.896 16:12:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:51.896 16:12:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:51.896 16:12:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:52.154 16:12:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:52.411 [ 00:35:52.411 { 00:35:52.411 "name": "BaseBdev3", 00:35:52.411 "aliases": [ 00:35:52.411 "433f264a-d433-448a-8477-f20d02cb3dae" 00:35:52.411 ], 00:35:52.411 "product_name": "Malloc disk", 00:35:52.411 "block_size": 512, 00:35:52.411 "num_blocks": 65536, 00:35:52.411 "uuid": "433f264a-d433-448a-8477-f20d02cb3dae", 00:35:52.411 "assigned_rate_limits": { 00:35:52.411 
"rw_ios_per_sec": 0, 00:35:52.411 "rw_mbytes_per_sec": 0, 00:35:52.411 "r_mbytes_per_sec": 0, 00:35:52.411 "w_mbytes_per_sec": 0 00:35:52.411 }, 00:35:52.411 "claimed": true, 00:35:52.411 "claim_type": "exclusive_write", 00:35:52.411 "zoned": false, 00:35:52.411 "supported_io_types": { 00:35:52.411 "read": true, 00:35:52.411 "write": true, 00:35:52.411 "unmap": true, 00:35:52.411 "write_zeroes": true, 00:35:52.411 "flush": true, 00:35:52.411 "reset": true, 00:35:52.411 "compare": false, 00:35:52.411 "compare_and_write": false, 00:35:52.411 "abort": true, 00:35:52.411 "nvme_admin": false, 00:35:52.411 "nvme_io": false 00:35:52.411 }, 00:35:52.411 "memory_domains": [ 00:35:52.411 { 00:35:52.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:52.411 "dma_device_type": 2 00:35:52.411 } 00:35:52.411 ], 00:35:52.411 "driver_specific": {} 00:35:52.411 } 00:35:52.411 ] 00:35:52.411 16:12:56 -- common/autotest_common.sh@895 -- # return 0 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:52.411 16:12:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:52.412 16:12:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.412 16:12:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:52.669 16:12:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:52.669 "name": "Existed_Raid", 00:35:52.669 "uuid": "2b245c95-a46f-45e1-95cf-e4bc7294e022", 00:35:52.669 "strip_size_kb": 64, 00:35:52.669 "state": "online", 00:35:52.669 "raid_level": "raid0", 00:35:52.669 "superblock": false, 00:35:52.669 "num_base_bdevs": 3, 00:35:52.669 "num_base_bdevs_discovered": 3, 00:35:52.669 "num_base_bdevs_operational": 3, 00:35:52.669 "base_bdevs_list": [ 00:35:52.669 { 00:35:52.669 "name": "BaseBdev1", 00:35:52.669 "uuid": "8bf63afe-55a0-4282-ac84-03a6805d6c68", 00:35:52.669 "is_configured": true, 00:35:52.669 "data_offset": 0, 00:35:52.669 "data_size": 65536 00:35:52.669 }, 00:35:52.669 { 00:35:52.669 "name": "BaseBdev2", 00:35:52.670 "uuid": "363d0107-923d-4f59-8423-fa3e35fc77e2", 00:35:52.670 "is_configured": true, 00:35:52.670 "data_offset": 0, 00:35:52.670 "data_size": 65536 00:35:52.670 }, 00:35:52.670 { 00:35:52.670 "name": "BaseBdev3", 00:35:52.670 "uuid": "433f264a-d433-448a-8477-f20d02cb3dae", 00:35:52.670 "is_configured": true, 00:35:52.670 "data_offset": 0, 00:35:52.670 "data_size": 65536 00:35:52.670 } 00:35:52.670 ] 00:35:52.670 }' 00:35:52.670 16:12:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:52.670 16:12:56 -- common/autotest_common.sh@10 -- # set +x 00:35:52.928 16:12:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:35:53.186 [2024-07-22 16:12:57.388238] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:53.186 [2024-07-22 16:12:57.388307] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:53.186 [2024-07-22 16:12:57.388391] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.445 16:12:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:53.703 16:12:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:53.703 "name": "Existed_Raid", 00:35:53.703 "uuid": "2b245c95-a46f-45e1-95cf-e4bc7294e022", 00:35:53.703 "strip_size_kb": 64, 00:35:53.703 "state": "offline", 00:35:53.703 "raid_level": "raid0", 00:35:53.703 "superblock": false, 00:35:53.703 "num_base_bdevs": 3, 00:35:53.703 "num_base_bdevs_discovered": 2, 00:35:53.703 "num_base_bdevs_operational": 2, 00:35:53.703 "base_bdevs_list": [ 00:35:53.703 { 00:35:53.703 "name": null, 00:35:53.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.703 "is_configured": false, 00:35:53.703 "data_offset": 0, 00:35:53.703 "data_size": 65536 00:35:53.703 }, 00:35:53.703 { 00:35:53.703 "name": "BaseBdev2", 00:35:53.703 "uuid": "363d0107-923d-4f59-8423-fa3e35fc77e2", 00:35:53.703 "is_configured": true, 00:35:53.703 "data_offset": 0, 00:35:53.703 "data_size": 65536 00:35:53.703 }, 00:35:53.703 { 00:35:53.703 "name": "BaseBdev3", 00:35:53.703 "uuid": "433f264a-d433-448a-8477-f20d02cb3dae", 00:35:53.703 "is_configured": true, 00:35:53.703 "data_offset": 0, 00:35:53.703 "data_size": 65536 00:35:53.703 } 00:35:53.703 ] 00:35:53.703 }' 00:35:53.703 16:12:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:53.703 16:12:57 -- common/autotest_common.sh@10 -- # set +x 00:35:53.961 16:12:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:35:53.961 16:12:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:53.961 16:12:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:53.961 16:12:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:54.220 16:12:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:54.220 16:12:58 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:35:54.220 16:12:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:54.477 [2024-07-22 16:12:58.525581] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:54.477 16:12:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:54.477 16:12:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:54.477 16:12:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.477 16:12:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:35:54.734 16:12:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:35:54.734 16:12:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:54.734 16:12:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:35:54.991 [2024-07-22 16:12:59.131174] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:54.991 [2024-07-22 16:12:59.131268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:35:54.991 16:12:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:35:54.991 16:12:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:35:54.991 16:12:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.991 16:12:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:35:55.249 16:12:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:35:55.249 16:12:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:35:55.249 16:12:59 -- bdev/bdev_raid.sh@287 -- # killprocess 72399 00:35:55.249 16:12:59 -- common/autotest_common.sh@926 -- # '[' -z 72399 ']' 00:35:55.249 16:12:59 -- common/autotest_common.sh@930 -- # kill -0 72399 00:35:55.249 16:12:59 -- common/autotest_common.sh@931 -- # uname 00:35:55.506 16:12:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:55.506 16:12:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72399 00:35:55.506 16:12:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:55.506 killing process with pid 72399 00:35:55.506 16:12:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:55.506 16:12:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72399' 00:35:55.506 16:12:59 -- common/autotest_common.sh@945 -- # kill 72399 00:35:55.506 [2024-07-22 16:12:59.543299] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:55.506 16:12:59 -- common/autotest_common.sh@950 -- # wait 72399 00:35:55.506 [2024-07-22 16:12:59.543484] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:35:56.905 00:35:56.905 real 0m11.665s 00:35:56.905 user 0m18.990s 00:35:56.905 sys 0m1.986s 00:35:56.905 16:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:56.905 ************************************ 00:35:56.905 END TEST raid_state_function_test 00:35:56.905 ************************************ 00:35:56.905 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:35:56.905 16:13:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:35:56.905 16:13:00 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:35:56.905 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:35:56.905 ************************************ 00:35:56.905 START TEST raid_state_function_test_sb 00:35:56.905 ************************************ 00:35:56.905 16:13:00 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:35:56.905 16:13:00 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=72756 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72756' 00:35:56.906 Process raid pid: 72756 00:35:56.906 16:13:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72756 /var/tmp/spdk-raid.sock 00:35:56.906 16:13:00 -- common/autotest_common.sh@819 -- # '[' -z 72756 ']' 00:35:56.906 16:13:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:56.906 16:13:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:56.906 16:13:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:56.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:56.906 16:13:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:56.906 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:35:56.906 [2024-07-22 16:13:01.004491] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:35:56.906 [2024-07-22 16:13:01.004670] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:57.164 [2024-07-22 16:13:01.182010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.422 [2024-07-22 16:13:01.448767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.422 [2024-07-22 16:13:01.668503] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:57.681 16:13:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:57.681 16:13:01 -- common/autotest_common.sh@852 -- # return 0 00:35:57.681 16:13:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:35:57.939 [2024-07-22 16:13:02.120530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:57.939 [2024-07-22 16:13:02.120640] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:57.939 [2024-07-22 16:13:02.120656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:57.939 [2024-07-22 16:13:02.120674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:57.939 [2024-07-22 16:13:02.120683] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:57.939 [2024-07-22 16:13:02.120698] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.939 16:13:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:58.198 16:13:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:35:58.198 "name": "Existed_Raid", 00:35:58.198 "uuid": "606a1ad1-058e-4b18-8358-3613c8404230", 00:35:58.198 "strip_size_kb": 64, 00:35:58.198 "state": "configuring", 00:35:58.198 "raid_level": "raid0", 00:35:58.198 "superblock": true, 00:35:58.198 "num_base_bdevs": 3, 00:35:58.198 "num_base_bdevs_discovered": 0, 00:35:58.198 "num_base_bdevs_operational": 3, 00:35:58.198 "base_bdevs_list": [ 00:35:58.198 { 00:35:58.198 "name": "BaseBdev1", 00:35:58.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.198 "is_configured": false, 00:35:58.198 "data_offset": 0, 00:35:58.198 "data_size": 0 00:35:58.198 }, 00:35:58.198 { 00:35:58.198 "name": "BaseBdev2", 00:35:58.198 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:58.198 "is_configured": false, 00:35:58.198 "data_offset": 0, 00:35:58.198 "data_size": 0 00:35:58.198 }, 00:35:58.198 { 00:35:58.198 "name": "BaseBdev3", 00:35:58.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.198 "is_configured": false, 00:35:58.198 "data_offset": 0, 00:35:58.198 "data_size": 0 00:35:58.198 } 00:35:58.198 ] 00:35:58.198 }' 00:35:58.198 16:13:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:35:58.198 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:35:58.457 16:13:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:58.715 [2024-07-22 16:13:02.960709] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:58.715 [2024-07-22 16:13:02.960790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:35:58.715 16:13:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:35:58.973 [2024-07-22 16:13:03.216821] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:58.973 [2024-07-22 16:13:03.216921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:58.973 [2024-07-22 16:13:03.216938] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:58.973 [2024-07-22 16:13:03.216958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:58.973 [2024-07-22 16:13:03.216968] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:58.973 [2024-07-22 16:13:03.216984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:58.973 16:13:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:35:59.540 [2024-07-22 16:13:03.513271] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:59.540 BaseBdev1 00:35:59.540 16:13:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:35:59.540 16:13:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:35:59.540 16:13:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:35:59.540 16:13:03 -- common/autotest_common.sh@889 -- # local i 00:35:59.540 16:13:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:35:59.540 16:13:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:35:59.540 16:13:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:59.540 16:13:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:59.798 [ 00:35:59.798 { 00:35:59.798 "name": "BaseBdev1", 00:35:59.798 "aliases": [ 00:35:59.798 "1aad4040-becf-4db1-9611-07f573e47882" 00:35:59.798 ], 00:35:59.798 "product_name": "Malloc disk", 00:35:59.798 "block_size": 512, 00:35:59.798 "num_blocks": 65536, 00:35:59.798 "uuid": "1aad4040-becf-4db1-9611-07f573e47882", 00:35:59.798 "assigned_rate_limits": { 00:35:59.798 "rw_ios_per_sec": 0, 00:35:59.798 "rw_mbytes_per_sec": 0, 00:35:59.798 "r_mbytes_per_sec": 0, 00:35:59.798 
"w_mbytes_per_sec": 0 00:35:59.798 }, 00:35:59.798 "claimed": true, 00:35:59.798 "claim_type": "exclusive_write", 00:35:59.798 "zoned": false, 00:35:59.798 "supported_io_types": { 00:35:59.798 "read": true, 00:35:59.798 "write": true, 00:35:59.798 "unmap": true, 00:35:59.798 "write_zeroes": true, 00:35:59.798 "flush": true, 00:35:59.798 "reset": true, 00:35:59.799 "compare": false, 00:35:59.799 "compare_and_write": false, 00:35:59.799 "abort": true, 00:35:59.799 "nvme_admin": false, 00:35:59.799 "nvme_io": false 00:35:59.799 }, 00:35:59.799 "memory_domains": [ 00:35:59.799 { 00:35:59.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.799 "dma_device_type": 2 00:35:59.799 } 00:35:59.799 ], 00:35:59.799 "driver_specific": {} 00:35:59.799 } 00:35:59.799 ] 00:35:59.799 16:13:04 -- common/autotest_common.sh@895 -- # return 0 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:59.799 16:13:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:00.057 16:13:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:00.057 "name": "Existed_Raid", 00:36:00.057 "uuid": "26e4bc1c-63c3-4a0a-b098-e989eea0c83a", 00:36:00.057 "strip_size_kb": 64, 00:36:00.057 "state": "configuring", 00:36:00.057 "raid_level": "raid0", 00:36:00.057 "superblock": true, 00:36:00.057 "num_base_bdevs": 3, 00:36:00.057 "num_base_bdevs_discovered": 1, 00:36:00.057 "num_base_bdevs_operational": 3, 00:36:00.057 "base_bdevs_list": [ 00:36:00.057 { 00:36:00.057 "name": "BaseBdev1", 00:36:00.057 "uuid": "1aad4040-becf-4db1-9611-07f573e47882", 00:36:00.057 "is_configured": true, 00:36:00.057 "data_offset": 2048, 00:36:00.057 "data_size": 63488 00:36:00.057 }, 00:36:00.057 { 00:36:00.057 "name": "BaseBdev2", 00:36:00.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.057 "is_configured": false, 00:36:00.057 "data_offset": 0, 00:36:00.057 "data_size": 0 00:36:00.057 }, 00:36:00.057 { 00:36:00.057 "name": "BaseBdev3", 00:36:00.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.057 "is_configured": false, 00:36:00.057 "data_offset": 0, 00:36:00.057 "data_size": 0 00:36:00.057 } 00:36:00.057 ] 00:36:00.057 }' 00:36:00.057 16:13:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:00.057 16:13:04 -- common/autotest_common.sh@10 -- # set +x 00:36:00.622 16:13:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:00.880 [2024-07-22 16:13:04.897984] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:00.880 [2024-07-22 16:13:04.898099] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:36:00.880 16:13:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:36:00.880 16:13:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:01.138 16:13:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:01.396 BaseBdev1 00:36:01.396 16:13:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:36:01.396 16:13:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:36:01.396 16:13:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:01.396 16:13:05 -- common/autotest_common.sh@889 -- # local i 00:36:01.396 16:13:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:01.396 16:13:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:01.396 16:13:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:01.654 16:13:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:01.913 [ 00:36:01.913 { 00:36:01.913 "name": "BaseBdev1", 00:36:01.913 "aliases": [ 00:36:01.913 "0df99a25-28c3-423d-98cd-2344fadd4734" 00:36:01.913 ], 00:36:01.913 "product_name": "Malloc disk", 00:36:01.913 "block_size": 512, 00:36:01.913 "num_blocks": 65536, 00:36:01.913 "uuid": "0df99a25-28c3-423d-98cd-2344fadd4734", 00:36:01.913 "assigned_rate_limits": { 00:36:01.913 "rw_ios_per_sec": 0, 00:36:01.913 "rw_mbytes_per_sec": 0, 00:36:01.913 "r_mbytes_per_sec": 0, 00:36:01.913 "w_mbytes_per_sec": 0 00:36:01.913 }, 00:36:01.913 "claimed": false, 00:36:01.913 "zoned": false, 00:36:01.913 "supported_io_types": { 00:36:01.913 "read": true, 00:36:01.913 "write": true, 00:36:01.913 "unmap": true, 00:36:01.913 "write_zeroes": true, 00:36:01.913 "flush": true, 00:36:01.913 "reset": true, 00:36:01.913 "compare": false, 00:36:01.913 "compare_and_write": false, 00:36:01.913 "abort": true, 00:36:01.913 "nvme_admin": false, 00:36:01.913 "nvme_io": false 00:36:01.913 }, 00:36:01.913 "memory_domains": [ 00:36:01.913 { 00:36:01.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:01.913 "dma_device_type": 2 00:36:01.913 } 00:36:01.913 ], 00:36:01.913 "driver_specific": {} 00:36:01.913 } 00:36:01.913 ] 00:36:01.913 16:13:06 -- common/autotest_common.sh@895 -- # return 0 00:36:01.913 16:13:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:02.172 [2024-07-22 16:13:06.219582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:02.172 [2024-07-22 16:13:06.222349] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:02.172 [2024-07-22 16:13:06.222421] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:02.172 [2024-07-22 16:13:06.222437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:02.172 [2024-07-22 16:13:06.222453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:02.172 
16:13:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.172 16:13:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:02.430 16:13:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:02.430 "name": "Existed_Raid", 00:36:02.430 "uuid": "070314d5-f60b-4aac-8032-128329207cba", 00:36:02.430 "strip_size_kb": 64, 00:36:02.430 "state": "configuring", 00:36:02.430 "raid_level": "raid0", 00:36:02.430 "superblock": true, 00:36:02.430 "num_base_bdevs": 3, 00:36:02.430 "num_base_bdevs_discovered": 1, 00:36:02.430 "num_base_bdevs_operational": 3, 00:36:02.430 "base_bdevs_list": [ 00:36:02.430 { 00:36:02.430 "name": "BaseBdev1", 00:36:02.430 "uuid": "0df99a25-28c3-423d-98cd-2344fadd4734", 00:36:02.430 "is_configured": true, 00:36:02.430 "data_offset": 2048, 00:36:02.430 "data_size": 63488 00:36:02.430 }, 00:36:02.430 { 00:36:02.430 "name": "BaseBdev2", 00:36:02.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.430 "is_configured": false, 00:36:02.430 "data_offset": 0, 00:36:02.430 "data_size": 0 00:36:02.430 }, 00:36:02.430 { 00:36:02.430 "name": "BaseBdev3", 00:36:02.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.430 "is_configured": false, 00:36:02.430 "data_offset": 0, 00:36:02.430 "data_size": 0 00:36:02.430 } 00:36:02.430 ] 00:36:02.430 }' 00:36:02.430 16:13:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:02.430 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:36:02.689 16:13:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:02.948 [2024-07-22 16:13:07.059203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:02.948 BaseBdev2 00:36:02.948 16:13:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:36:02.948 16:13:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:36:02.948 16:13:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:02.948 16:13:07 -- common/autotest_common.sh@889 -- # local i 00:36:02.948 16:13:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:02.948 16:13:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:02.948 16:13:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:03.207 16:13:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:03.465 [ 00:36:03.465 { 00:36:03.465 "name": "BaseBdev2", 00:36:03.465 "aliases": [ 00:36:03.465 
"f690fc82-8786-40dc-94ef-6552a0245218" 00:36:03.465 ], 00:36:03.465 "product_name": "Malloc disk", 00:36:03.465 "block_size": 512, 00:36:03.465 "num_blocks": 65536, 00:36:03.465 "uuid": "f690fc82-8786-40dc-94ef-6552a0245218", 00:36:03.465 "assigned_rate_limits": { 00:36:03.465 "rw_ios_per_sec": 0, 00:36:03.465 "rw_mbytes_per_sec": 0, 00:36:03.465 "r_mbytes_per_sec": 0, 00:36:03.465 "w_mbytes_per_sec": 0 00:36:03.465 }, 00:36:03.465 "claimed": true, 00:36:03.465 "claim_type": "exclusive_write", 00:36:03.465 "zoned": false, 00:36:03.465 "supported_io_types": { 00:36:03.465 "read": true, 00:36:03.465 "write": true, 00:36:03.465 "unmap": true, 00:36:03.465 "write_zeroes": true, 00:36:03.465 "flush": true, 00:36:03.465 "reset": true, 00:36:03.465 "compare": false, 00:36:03.465 "compare_and_write": false, 00:36:03.465 "abort": true, 00:36:03.465 "nvme_admin": false, 00:36:03.465 "nvme_io": false 00:36:03.465 }, 00:36:03.465 "memory_domains": [ 00:36:03.465 { 00:36:03.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:03.465 "dma_device_type": 2 00:36:03.465 } 00:36:03.465 ], 00:36:03.465 "driver_specific": {} 00:36:03.465 } 00:36:03.465 ] 00:36:03.465 16:13:07 -- common/autotest_common.sh@895 -- # return 0 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.465 16:13:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:03.724 16:13:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:03.724 "name": "Existed_Raid", 00:36:03.724 "uuid": "070314d5-f60b-4aac-8032-128329207cba", 00:36:03.724 "strip_size_kb": 64, 00:36:03.724 "state": "configuring", 00:36:03.724 "raid_level": "raid0", 00:36:03.724 "superblock": true, 00:36:03.724 "num_base_bdevs": 3, 00:36:03.724 "num_base_bdevs_discovered": 2, 00:36:03.724 "num_base_bdevs_operational": 3, 00:36:03.724 "base_bdevs_list": [ 00:36:03.724 { 00:36:03.724 "name": "BaseBdev1", 00:36:03.724 "uuid": "0df99a25-28c3-423d-98cd-2344fadd4734", 00:36:03.724 "is_configured": true, 00:36:03.724 "data_offset": 2048, 00:36:03.724 "data_size": 63488 00:36:03.724 }, 00:36:03.724 { 00:36:03.724 "name": "BaseBdev2", 00:36:03.724 "uuid": "f690fc82-8786-40dc-94ef-6552a0245218", 00:36:03.724 "is_configured": true, 00:36:03.724 "data_offset": 2048, 00:36:03.724 "data_size": 63488 00:36:03.724 }, 00:36:03.724 { 00:36:03.724 "name": "BaseBdev3", 00:36:03.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:03.724 "is_configured": false, 00:36:03.724 "data_offset": 0, 00:36:03.724 "data_size": 0 00:36:03.724 
} 00:36:03.724 ] 00:36:03.724 }' 00:36:03.724 16:13:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:03.724 16:13:07 -- common/autotest_common.sh@10 -- # set +x 00:36:03.982 16:13:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:04.240 [2024-07-22 16:13:08.357384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:04.240 [2024-07-22 16:13:08.357646] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:36:04.240 [2024-07-22 16:13:08.357671] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:04.240 [2024-07-22 16:13:08.357790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:36:04.240 [2024-07-22 16:13:08.358273] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:36:04.240 [2024-07-22 16:13:08.358292] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:36:04.240 [2024-07-22 16:13:08.358488] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:04.240 BaseBdev3 00:36:04.240 16:13:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:36:04.240 16:13:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:36:04.240 16:13:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:04.240 16:13:08 -- common/autotest_common.sh@889 -- # local i 00:36:04.240 16:13:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:04.240 16:13:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:04.240 16:13:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:04.498 16:13:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:04.757 [ 00:36:04.757 { 00:36:04.757 "name": "BaseBdev3", 00:36:04.757 "aliases": [ 00:36:04.757 "e46b8437-0d04-4086-9b83-8327cdde431c" 00:36:04.757 ], 00:36:04.757 "product_name": "Malloc disk", 00:36:04.757 "block_size": 512, 00:36:04.757 "num_blocks": 65536, 00:36:04.757 "uuid": "e46b8437-0d04-4086-9b83-8327cdde431c", 00:36:04.757 "assigned_rate_limits": { 00:36:04.757 "rw_ios_per_sec": 0, 00:36:04.757 "rw_mbytes_per_sec": 0, 00:36:04.757 "r_mbytes_per_sec": 0, 00:36:04.757 "w_mbytes_per_sec": 0 00:36:04.757 }, 00:36:04.757 "claimed": true, 00:36:04.757 "claim_type": "exclusive_write", 00:36:04.757 "zoned": false, 00:36:04.757 "supported_io_types": { 00:36:04.757 "read": true, 00:36:04.757 "write": true, 00:36:04.757 "unmap": true, 00:36:04.757 "write_zeroes": true, 00:36:04.757 "flush": true, 00:36:04.757 "reset": true, 00:36:04.757 "compare": false, 00:36:04.757 "compare_and_write": false, 00:36:04.757 "abort": true, 00:36:04.757 "nvme_admin": false, 00:36:04.757 "nvme_io": false 00:36:04.757 }, 00:36:04.757 "memory_domains": [ 00:36:04.757 { 00:36:04.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:04.757 "dma_device_type": 2 00:36:04.757 } 00:36:04.757 ], 00:36:04.757 "driver_specific": {} 00:36:04.757 } 00:36:04.757 ] 00:36:04.757 16:13:08 -- common/autotest_common.sh@895 -- # return 0 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.757 16:13:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.015 16:13:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:05.015 "name": "Existed_Raid", 00:36:05.015 "uuid": "070314d5-f60b-4aac-8032-128329207cba", 00:36:05.015 "strip_size_kb": 64, 00:36:05.015 "state": "online", 00:36:05.015 "raid_level": "raid0", 00:36:05.015 "superblock": true, 00:36:05.015 "num_base_bdevs": 3, 00:36:05.015 "num_base_bdevs_discovered": 3, 00:36:05.015 "num_base_bdevs_operational": 3, 00:36:05.015 "base_bdevs_list": [ 00:36:05.015 { 00:36:05.015 "name": "BaseBdev1", 00:36:05.015 "uuid": "0df99a25-28c3-423d-98cd-2344fadd4734", 00:36:05.015 "is_configured": true, 00:36:05.015 "data_offset": 2048, 00:36:05.015 "data_size": 63488 00:36:05.015 }, 00:36:05.015 { 00:36:05.015 "name": "BaseBdev2", 00:36:05.015 "uuid": "f690fc82-8786-40dc-94ef-6552a0245218", 00:36:05.015 "is_configured": true, 00:36:05.015 "data_offset": 2048, 00:36:05.015 "data_size": 63488 00:36:05.015 }, 00:36:05.015 { 00:36:05.015 "name": "BaseBdev3", 00:36:05.015 "uuid": "e46b8437-0d04-4086-9b83-8327cdde431c", 00:36:05.015 "is_configured": true, 00:36:05.015 "data_offset": 2048, 00:36:05.015 "data_size": 63488 00:36:05.015 } 00:36:05.015 ] 00:36:05.015 }' 00:36:05.015 16:13:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:05.015 16:13:09 -- common/autotest_common.sh@10 -- # set +x 00:36:05.273 16:13:09 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:05.598 [2024-07-22 16:13:09.605999] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:05.598 [2024-07-22 16:13:09.606055] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:05.598 [2024-07-22 16:13:09.606128] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.598 16:13:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.855 16:13:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:05.855 "name": "Existed_Raid", 00:36:05.855 "uuid": "070314d5-f60b-4aac-8032-128329207cba", 00:36:05.855 "strip_size_kb": 64, 00:36:05.855 "state": "offline", 00:36:05.855 "raid_level": "raid0", 00:36:05.855 "superblock": true, 00:36:05.855 "num_base_bdevs": 3, 00:36:05.855 "num_base_bdevs_discovered": 2, 00:36:05.855 "num_base_bdevs_operational": 2, 00:36:05.855 "base_bdevs_list": [ 00:36:05.855 { 00:36:05.855 "name": null, 00:36:05.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.855 "is_configured": false, 00:36:05.855 "data_offset": 2048, 00:36:05.855 "data_size": 63488 00:36:05.855 }, 00:36:05.855 { 00:36:05.855 "name": "BaseBdev2", 00:36:05.855 "uuid": "f690fc82-8786-40dc-94ef-6552a0245218", 00:36:05.855 "is_configured": true, 00:36:05.855 "data_offset": 2048, 00:36:05.855 "data_size": 63488 00:36:05.855 }, 00:36:05.855 { 00:36:05.855 "name": "BaseBdev3", 00:36:05.855 "uuid": "e46b8437-0d04-4086-9b83-8327cdde431c", 00:36:05.855 "is_configured": true, 00:36:05.855 "data_offset": 2048, 00:36:05.855 "data_size": 63488 00:36:05.855 } 00:36:05.855 ] 00:36:05.855 }' 00:36:05.855 16:13:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:05.855 16:13:09 -- common/autotest_common.sh@10 -- # set +x 00:36:06.113 16:13:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:36:06.113 16:13:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:06.113 16:13:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.113 16:13:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:06.371 16:13:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:06.371 16:13:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:06.371 16:13:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:06.629 [2024-07-22 16:13:10.794863] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:06.886 16:13:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:06.886 16:13:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:06.886 16:13:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.886 16:13:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:07.144 16:13:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:07.144 16:13:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:07.144 16:13:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:07.144 [2024-07-22 16:13:11.400498] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:07.144 [2024-07-22 
16:13:11.400593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:36:07.401 16:13:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:07.401 16:13:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:07.401 16:13:11 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:07.401 16:13:11 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:36:07.659 16:13:11 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:36:07.659 16:13:11 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:36:07.659 16:13:11 -- bdev/bdev_raid.sh@287 -- # killprocess 72756 00:36:07.659 16:13:11 -- common/autotest_common.sh@926 -- # '[' -z 72756 ']' 00:36:07.659 16:13:11 -- common/autotest_common.sh@930 -- # kill -0 72756 00:36:07.659 16:13:11 -- common/autotest_common.sh@931 -- # uname 00:36:07.659 16:13:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:07.659 16:13:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72756 00:36:07.659 killing process with pid 72756 00:36:07.659 16:13:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:07.659 16:13:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:07.659 16:13:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72756' 00:36:07.659 16:13:11 -- common/autotest_common.sh@945 -- # kill 72756 00:36:07.659 16:13:11 -- common/autotest_common.sh@950 -- # wait 72756 00:36:07.659 [2024-07-22 16:13:11.823808] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:07.659 [2024-07-22 16:13:11.823959] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:09.032 ************************************ 00:36:09.032 END TEST raid_state_function_test_sb 00:36:09.032 ************************************ 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:36:09.032 00:36:09.032 real 0m12.188s 00:36:09.032 user 0m19.945s 00:36:09.032 sys 0m1.996s 00:36:09.032 16:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.032 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:36:09.032 16:13:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:36:09.032 16:13:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:09.032 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:36:09.032 ************************************ 00:36:09.032 START TEST raid_superblock_test 00:36:09.032 ************************************ 00:36:09.032 16:13:13 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@345 
-- # local strip_size_create_arg 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=73121 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:09.032 16:13:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 73121 /var/tmp/spdk-raid.sock 00:36:09.032 16:13:13 -- common/autotest_common.sh@819 -- # '[' -z 73121 ']' 00:36:09.032 16:13:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:09.032 16:13:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:09.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:09.033 16:13:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:09.033 16:13:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:09.033 16:13:13 -- common/autotest_common.sh@10 -- # set +x 00:36:09.033 [2024-07-22 16:13:13.241497] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:36:09.033 [2024-07-22 16:13:13.242007] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73121 ] 00:36:09.290 [2024-07-22 16:13:13.419441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.548 [2024-07-22 16:13:13.684417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.805 [2024-07-22 16:13:13.890812] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:10.064 16:13:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:10.064 16:13:14 -- common/autotest_common.sh@852 -- # return 0 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:10.064 16:13:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:36:10.322 malloc1 00:36:10.322 16:13:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:10.580 [2024-07-22 16:13:14.648347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:10.580 [2024-07-22 16:13:14.648498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:10.580 [2024-07-22 
16:13:14.648567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:36:10.580 [2024-07-22 16:13:14.648584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:10.580 [2024-07-22 16:13:14.651753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:10.580 [2024-07-22 16:13:14.651801] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:10.580 pt1 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:10.580 16:13:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:36:10.839 malloc2 00:36:10.839 16:13:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:11.097 [2024-07-22 16:13:15.123415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:11.097 [2024-07-22 16:13:15.123491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:11.097 [2024-07-22 16:13:15.123527] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:36:11.097 [2024-07-22 16:13:15.123544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:11.097 [2024-07-22 16:13:15.126660] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:11.097 [2024-07-22 16:13:15.126704] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:11.097 pt2 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:11.097 16:13:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:36:11.355 malloc3 00:36:11.355 16:13:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:11.355 [2024-07-22 16:13:15.611740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:11.355 [2024-07-22 16:13:15.612165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:11.355 [2024-07-22 
16:13:15.612226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:36:11.355 [2024-07-22 16:13:15.612245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:11.355 [2024-07-22 16:13:15.615274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:11.355 [2024-07-22 16:13:15.615343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:11.355 pt3 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:36:11.613 [2024-07-22 16:13:15.839906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:11.613 [2024-07-22 16:13:15.842481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:11.613 [2024-07-22 16:13:15.842759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:11.613 [2024-07-22 16:13:15.843041] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:36:11.613 [2024-07-22 16:13:15.843070] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:11.613 [2024-07-22 16:13:15.843235] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:36:11.613 [2024-07-22 16:13:15.843750] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:36:11.613 [2024-07-22 16:13:15.843771] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:36:11.613 [2024-07-22 16:13:15.844094] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.613 16:13:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:11.871 16:13:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:11.871 "name": "raid_bdev1", 00:36:11.871 "uuid": "e66c63f5-cf87-4309-8a74-6a0fc380e450", 00:36:11.871 "strip_size_kb": 64, 00:36:11.871 "state": "online", 00:36:11.871 "raid_level": "raid0", 00:36:11.871 "superblock": true, 00:36:11.871 "num_base_bdevs": 3, 00:36:11.871 "num_base_bdevs_discovered": 3, 00:36:11.871 "num_base_bdevs_operational": 3, 00:36:11.871 "base_bdevs_list": [ 00:36:11.871 { 00:36:11.871 "name": "pt1", 00:36:11.871 "uuid": "00938bc2-33c5-5651-81fa-eaa8ef7bf5f0", 
00:36:11.871 "is_configured": true, 00:36:11.871 "data_offset": 2048, 00:36:11.871 "data_size": 63488 00:36:11.871 }, 00:36:11.871 { 00:36:11.871 "name": "pt2", 00:36:11.871 "uuid": "0c46d17d-c2b2-566e-920c-edea66df0b19", 00:36:11.871 "is_configured": true, 00:36:11.871 "data_offset": 2048, 00:36:11.871 "data_size": 63488 00:36:11.871 }, 00:36:11.871 { 00:36:11.871 "name": "pt3", 00:36:11.871 "uuid": "033bb313-2d4d-52bb-90bc-bd3680df22f3", 00:36:11.871 "is_configured": true, 00:36:11.871 "data_offset": 2048, 00:36:11.871 "data_size": 63488 00:36:11.871 } 00:36:11.871 ] 00:36:11.871 }' 00:36:11.871 16:13:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:11.871 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:36:12.437 16:13:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:36:12.437 16:13:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:12.437 [2024-07-22 16:13:16.648578] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:12.437 16:13:16 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e66c63f5-cf87-4309-8a74-6a0fc380e450 00:36:12.437 16:13:16 -- bdev/bdev_raid.sh@380 -- # '[' -z e66c63f5-cf87-4309-8a74-6a0fc380e450 ']' 00:36:12.437 16:13:16 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:12.695 [2024-07-22 16:13:16.908392] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:12.695 [2024-07-22 16:13:16.908445] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:12.695 [2024-07-22 16:13:16.908555] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:12.695 [2024-07-22 16:13:16.908636] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:12.695 [2024-07-22 16:13:16.908656] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:36:12.695 16:13:16 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.695 16:13:16 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:36:12.952 16:13:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:36:12.953 16:13:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:36:12.953 16:13:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:12.953 16:13:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:13.210 16:13:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:13.210 16:13:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:13.776 16:13:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:13.776 16:13:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:13.776 16:13:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:13.776 16:13:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:14.035 16:13:18 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:36:14.035 16:13:18 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:36:14.035 16:13:18 -- common/autotest_common.sh@640 -- # local es=0 00:36:14.035 16:13:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:36:14.035 16:13:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:14.035 16:13:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:36:14.035 16:13:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:14.035 16:13:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:36:14.035 16:13:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:14.035 16:13:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:36:14.035 16:13:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:14.035 16:13:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:14.035 16:13:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:36:14.293 [2024-07-22 16:13:18.500859] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:14.293 [2024-07-22 16:13:18.503595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:14.293 [2024-07-22 16:13:18.503677] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:14.293 [2024-07-22 16:13:18.503755] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:36:14.293 [2024-07-22 16:13:18.503845] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:36:14.293 [2024-07-22 16:13:18.503882] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:36:14.293 [2024-07-22 16:13:18.503906] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:14.293 [2024-07-22 16:13:18.503944] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:36:14.293 request: 00:36:14.293 { 00:36:14.293 "name": "raid_bdev1", 00:36:14.293 "raid_level": "raid0", 00:36:14.293 "base_bdevs": [ 00:36:14.293 "malloc1", 00:36:14.293 "malloc2", 00:36:14.293 "malloc3" 00:36:14.293 ], 00:36:14.293 "superblock": false, 00:36:14.293 "strip_size_kb": 64, 00:36:14.293 "method": "bdev_raid_create", 00:36:14.293 "req_id": 1 00:36:14.293 } 00:36:14.293 Got JSON-RPC error response 00:36:14.293 response: 00:36:14.293 { 00:36:14.293 "code": -17, 00:36:14.293 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:14.293 } 00:36:14.293 16:13:18 -- common/autotest_common.sh@643 -- # es=1 00:36:14.293 16:13:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:36:14.293 16:13:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:36:14.293 16:13:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:36:14.293 16:13:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:36:14.293 16:13:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:36:14.552 16:13:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:36:14.552 16:13:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:36:14.552 16:13:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:14.811 [2024-07-22 16:13:19.024945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:14.811 [2024-07-22 16:13:19.025068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:14.811 [2024-07-22 16:13:19.025122] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:36:14.811 [2024-07-22 16:13:19.025157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:14.811 [2024-07-22 16:13:19.028210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:14.811 [2024-07-22 16:13:19.028260] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:14.811 [2024-07-22 16:13:19.028386] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:36:14.811 [2024-07-22 16:13:19.028466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:14.811 pt1 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.811 16:13:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.069 16:13:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:15.069 "name": "raid_bdev1", 00:36:15.069 "uuid": "e66c63f5-cf87-4309-8a74-6a0fc380e450", 00:36:15.069 "strip_size_kb": 64, 00:36:15.069 "state": "configuring", 00:36:15.069 "raid_level": "raid0", 00:36:15.069 "superblock": true, 00:36:15.069 "num_base_bdevs": 3, 00:36:15.069 "num_base_bdevs_discovered": 1, 00:36:15.069 "num_base_bdevs_operational": 3, 00:36:15.069 "base_bdevs_list": [ 00:36:15.069 { 00:36:15.069 "name": "pt1", 00:36:15.069 "uuid": "00938bc2-33c5-5651-81fa-eaa8ef7bf5f0", 00:36:15.069 "is_configured": true, 00:36:15.069 "data_offset": 2048, 00:36:15.069 "data_size": 63488 00:36:15.069 }, 00:36:15.069 { 00:36:15.069 "name": null, 00:36:15.069 "uuid": "0c46d17d-c2b2-566e-920c-edea66df0b19", 00:36:15.069 "is_configured": false, 00:36:15.069 "data_offset": 2048, 00:36:15.069 "data_size": 63488 00:36:15.069 }, 00:36:15.069 { 00:36:15.069 "name": null, 00:36:15.069 "uuid": "033bb313-2d4d-52bb-90bc-bd3680df22f3", 00:36:15.069 "is_configured": false, 00:36:15.069 "data_offset": 2048, 00:36:15.069 "data_size": 63488 
00:36:15.069 } 00:36:15.069 ] 00:36:15.069 }' 00:36:15.069 16:13:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:15.069 16:13:19 -- common/autotest_common.sh@10 -- # set +x 00:36:15.635 16:13:19 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:36:15.635 16:13:19 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:15.635 [2024-07-22 16:13:19.881243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:15.635 [2024-07-22 16:13:19.881363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.635 [2024-07-22 16:13:19.881400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:36:15.635 [2024-07-22 16:13:19.881419] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.635 [2024-07-22 16:13:19.881987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.635 [2024-07-22 16:13:19.882040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:15.635 [2024-07-22 16:13:19.882168] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:15.635 [2024-07-22 16:13:19.882220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:15.635 pt2 00:36:15.635 16:13:19 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:15.892 [2024-07-22 16:13:20.137380] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.892 16:13:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.462 16:13:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:16.462 "name": "raid_bdev1", 00:36:16.462 "uuid": "e66c63f5-cf87-4309-8a74-6a0fc380e450", 00:36:16.462 "strip_size_kb": 64, 00:36:16.462 "state": "configuring", 00:36:16.462 "raid_level": "raid0", 00:36:16.462 "superblock": true, 00:36:16.462 "num_base_bdevs": 3, 00:36:16.462 "num_base_bdevs_discovered": 1, 00:36:16.462 "num_base_bdevs_operational": 3, 00:36:16.462 "base_bdevs_list": [ 00:36:16.462 { 00:36:16.462 "name": "pt1", 00:36:16.462 "uuid": "00938bc2-33c5-5651-81fa-eaa8ef7bf5f0", 00:36:16.462 "is_configured": true, 00:36:16.462 "data_offset": 2048, 00:36:16.462 "data_size": 63488 00:36:16.462 }, 00:36:16.462 { 00:36:16.462 "name": null, 00:36:16.462 "uuid": "0c46d17d-c2b2-566e-920c-edea66df0b19", 00:36:16.462 
"is_configured": false, 00:36:16.462 "data_offset": 2048, 00:36:16.462 "data_size": 63488 00:36:16.462 }, 00:36:16.462 { 00:36:16.462 "name": null, 00:36:16.462 "uuid": "033bb313-2d4d-52bb-90bc-bd3680df22f3", 00:36:16.462 "is_configured": false, 00:36:16.462 "data_offset": 2048, 00:36:16.462 "data_size": 63488 00:36:16.462 } 00:36:16.462 ] 00:36:16.462 }' 00:36:16.462 16:13:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:16.462 16:13:20 -- common/autotest_common.sh@10 -- # set +x 00:36:16.462 16:13:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:36:16.462 16:13:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:16.462 16:13:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:17.030 [2024-07-22 16:13:20.993686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:17.030 [2024-07-22 16:13:20.993785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:17.030 [2024-07-22 16:13:20.993830] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:36:17.030 [2024-07-22 16:13:20.993845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:17.030 [2024-07-22 16:13:20.995276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:17.030 [2024-07-22 16:13:20.995316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:17.030 [2024-07-22 16:13:20.995450] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:17.030 [2024-07-22 16:13:20.995482] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:17.030 pt2 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:17.030 [2024-07-22 16:13:21.221781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:17.030 [2024-07-22 16:13:21.222398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:17.030 [2024-07-22 16:13:21.222743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:36:17.030 [2024-07-22 16:13:21.223071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:17.030 [2024-07-22 16:13:21.223958] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:17.030 [2024-07-22 16:13:21.224293] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:17.030 [2024-07-22 16:13:21.224691] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:17.030 [2024-07-22 16:13:21.224914] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:17.030 [2024-07-22 16:13:21.225298] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:36:17.030 pt3 00:36:17.030 [2024-07-22 16:13:21.225500] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:17.030 [2024-07-22 16:13:21.225638] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:36:17.030 
[2024-07-22 16:13:21.226045] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:36:17.030 [2024-07-22 16:13:21.226068] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:36:17.030 [2024-07-22 16:13:21.226266] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.030 16:13:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.599 16:13:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:17.599 "name": "raid_bdev1", 00:36:17.599 "uuid": "e66c63f5-cf87-4309-8a74-6a0fc380e450", 00:36:17.599 "strip_size_kb": 64, 00:36:17.599 "state": "online", 00:36:17.599 "raid_level": "raid0", 00:36:17.599 "superblock": true, 00:36:17.599 "num_base_bdevs": 3, 00:36:17.599 "num_base_bdevs_discovered": 3, 00:36:17.599 "num_base_bdevs_operational": 3, 00:36:17.599 "base_bdevs_list": [ 00:36:17.599 { 00:36:17.599 "name": "pt1", 00:36:17.599 "uuid": "00938bc2-33c5-5651-81fa-eaa8ef7bf5f0", 00:36:17.599 "is_configured": true, 00:36:17.599 "data_offset": 2048, 00:36:17.599 "data_size": 63488 00:36:17.599 }, 00:36:17.599 { 00:36:17.599 "name": "pt2", 00:36:17.599 "uuid": "0c46d17d-c2b2-566e-920c-edea66df0b19", 00:36:17.599 "is_configured": true, 00:36:17.599 "data_offset": 2048, 00:36:17.599 "data_size": 63488 00:36:17.599 }, 00:36:17.599 { 00:36:17.599 "name": "pt3", 00:36:17.599 "uuid": "033bb313-2d4d-52bb-90bc-bd3680df22f3", 00:36:17.599 "is_configured": true, 00:36:17.599 "data_offset": 2048, 00:36:17.599 "data_size": 63488 00:36:17.599 } 00:36:17.599 ] 00:36:17.599 }' 00:36:17.599 16:13:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:17.599 16:13:21 -- common/autotest_common.sh@10 -- # set +x 00:36:17.857 16:13:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:36:17.857 16:13:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:18.115 [2024-07-22 16:13:22.178834] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:18.115 16:13:22 -- bdev/bdev_raid.sh@430 -- # '[' e66c63f5-cf87-4309-8a74-6a0fc380e450 '!=' e66c63f5-cf87-4309-8a74-6a0fc380e450 ']' 00:36:18.115 16:13:22 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:36:18.115 16:13:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:18.115 16:13:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:36:18.115 16:13:22 
-- bdev/bdev_raid.sh@511 -- # killprocess 73121 00:36:18.115 16:13:22 -- common/autotest_common.sh@926 -- # '[' -z 73121 ']' 00:36:18.115 16:13:22 -- common/autotest_common.sh@930 -- # kill -0 73121 00:36:18.115 16:13:22 -- common/autotest_common.sh@931 -- # uname 00:36:18.115 16:13:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:18.115 16:13:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73121 00:36:18.115 killing process with pid 73121 00:36:18.115 16:13:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:18.115 16:13:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:18.115 16:13:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73121' 00:36:18.115 16:13:22 -- common/autotest_common.sh@945 -- # kill 73121 00:36:18.115 16:13:22 -- common/autotest_common.sh@950 -- # wait 73121 00:36:18.115 [2024-07-22 16:13:22.238177] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:18.115 [2024-07-22 16:13:22.238312] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:18.115 [2024-07-22 16:13:22.238386] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:18.115 [2024-07-22 16:13:22.238406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:36:18.373 [2024-07-22 16:13:22.519680] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:36:19.748 00:36:19.748 real 0m10.642s 00:36:19.748 user 0m17.232s 00:36:19.748 sys 0m1.698s 00:36:19.748 16:13:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:19.748 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:36:19.748 ************************************ 00:36:19.748 END TEST raid_superblock_test 00:36:19.748 ************************************ 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:36:19.748 16:13:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:36:19.748 16:13:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:19.748 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:36:19.748 ************************************ 00:36:19.748 START TEST raid_state_function_test 00:36:19.748 ************************************ 00:36:19.748 16:13:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@208 -- # echo 
BaseBdev3 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:19.748 Process raid pid: 73408 00:36:19.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=73408 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73408' 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73408 /var/tmp/spdk-raid.sock 00:36:19.748 16:13:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:19.748 16:13:23 -- common/autotest_common.sh@819 -- # '[' -z 73408 ']' 00:36:19.748 16:13:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:19.748 16:13:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:19.748 16:13:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:19.748 16:13:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:19.748 16:13:23 -- common/autotest_common.sh@10 -- # set +x 00:36:19.748 [2024-07-22 16:13:23.950000] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
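The trace above launches a bare bdev_svc app and blocks in waitforlisten until the RPC socket answers; every later step drives that app through rpc.py. A minimal sketch of that startup sequence, assuming a simplified polling loop and rpc_get_methods as the readiness probe (the real waitforlisten in test/common/autotest_common.sh does more bookkeeping); the paths, flags, and messages are taken from the trace itself:

```bash
# Sketch of the bdev_svc startup traced above; the polling loop is a simplified
# stand-in for waitforlisten from test/common/autotest_common.sh.
rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-raid.sock

# Start the bare bdev service with bdev_raid debug logging enabled (-L bdev_raid).
"$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for _ in $(seq 1 100); do
    # rpc_get_methods succeeds once the app is listening on the socket.
    if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
```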
00:36:19.748 [2024-07-22 16:13:23.950487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:20.007 [2024-07-22 16:13:24.131757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.266 [2024-07-22 16:13:24.416306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.524 [2024-07-22 16:13:24.628892] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:20.782 16:13:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:20.782 16:13:24 -- common/autotest_common.sh@852 -- # return 0 00:36:20.782 16:13:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:21.041 [2024-07-22 16:13:25.063510] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:21.041 [2024-07-22 16:13:25.063858] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:21.041 [2024-07-22 16:13:25.063980] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:21.041 [2024-07-22 16:13:25.064063] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:21.041 [2024-07-22 16:13:25.064173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:21.041 [2024-07-22 16:13:25.064231] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.041 16:13:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:21.299 16:13:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:21.299 "name": "Existed_Raid", 00:36:21.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.299 "strip_size_kb": 64, 00:36:21.299 "state": "configuring", 00:36:21.299 "raid_level": "concat", 00:36:21.299 "superblock": false, 00:36:21.299 "num_base_bdevs": 3, 00:36:21.299 "num_base_bdevs_discovered": 0, 00:36:21.299 "num_base_bdevs_operational": 3, 00:36:21.299 "base_bdevs_list": [ 00:36:21.299 { 00:36:21.299 "name": "BaseBdev1", 00:36:21.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.299 "is_configured": false, 00:36:21.299 "data_offset": 0, 00:36:21.299 "data_size": 0 00:36:21.299 }, 00:36:21.299 { 00:36:21.299 "name": "BaseBdev2", 00:36:21.299 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:21.299 "is_configured": false, 00:36:21.299 "data_offset": 0, 00:36:21.299 "data_size": 0 00:36:21.300 }, 00:36:21.300 { 00:36:21.300 "name": "BaseBdev3", 00:36:21.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.300 "is_configured": false, 00:36:21.300 "data_offset": 0, 00:36:21.300 "data_size": 0 00:36:21.300 } 00:36:21.300 ] 00:36:21.300 }' 00:36:21.300 16:13:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:21.300 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:36:21.558 16:13:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:21.817 [2024-07-22 16:13:25.947628] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:21.817 [2024-07-22 16:13:25.947965] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:36:21.817 16:13:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:22.075 [2024-07-22 16:13:26.227750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:22.075 [2024-07-22 16:13:26.228069] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:22.075 [2024-07-22 16:13:26.228194] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:22.075 [2024-07-22 16:13:26.228259] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:22.075 [2024-07-22 16:13:26.228362] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:22.075 [2024-07-22 16:13:26.228419] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:22.075 16:13:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:22.333 [2024-07-22 16:13:26.486734] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:22.333 BaseBdev1 00:36:22.333 16:13:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:36:22.333 16:13:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:36:22.333 16:13:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:22.333 16:13:26 -- common/autotest_common.sh@889 -- # local i 00:36:22.333 16:13:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:22.333 16:13:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:22.333 16:13:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:22.592 16:13:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:22.851 [ 00:36:22.851 { 00:36:22.851 "name": "BaseBdev1", 00:36:22.851 "aliases": [ 00:36:22.851 "d7d9410b-5bc6-4db6-bd2d-9883d6d22a36" 00:36:22.851 ], 00:36:22.851 "product_name": "Malloc disk", 00:36:22.851 "block_size": 512, 00:36:22.851 "num_blocks": 65536, 00:36:22.851 "uuid": "d7d9410b-5bc6-4db6-bd2d-9883d6d22a36", 00:36:22.851 "assigned_rate_limits": { 00:36:22.851 "rw_ios_per_sec": 0, 00:36:22.851 "rw_mbytes_per_sec": 0, 00:36:22.851 "r_mbytes_per_sec": 0, 00:36:22.851 "w_mbytes_per_sec": 
0 00:36:22.851 }, 00:36:22.851 "claimed": true, 00:36:22.851 "claim_type": "exclusive_write", 00:36:22.851 "zoned": false, 00:36:22.851 "supported_io_types": { 00:36:22.851 "read": true, 00:36:22.851 "write": true, 00:36:22.851 "unmap": true, 00:36:22.851 "write_zeroes": true, 00:36:22.851 "flush": true, 00:36:22.851 "reset": true, 00:36:22.851 "compare": false, 00:36:22.851 "compare_and_write": false, 00:36:22.851 "abort": true, 00:36:22.851 "nvme_admin": false, 00:36:22.851 "nvme_io": false 00:36:22.851 }, 00:36:22.851 "memory_domains": [ 00:36:22.851 { 00:36:22.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.851 "dma_device_type": 2 00:36:22.851 } 00:36:22.851 ], 00:36:22.851 "driver_specific": {} 00:36:22.851 } 00:36:22.851 ] 00:36:22.851 16:13:27 -- common/autotest_common.sh@895 -- # return 0 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.851 16:13:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:23.110 16:13:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:23.110 "name": "Existed_Raid", 00:36:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.110 "strip_size_kb": 64, 00:36:23.110 "state": "configuring", 00:36:23.110 "raid_level": "concat", 00:36:23.110 "superblock": false, 00:36:23.110 "num_base_bdevs": 3, 00:36:23.110 "num_base_bdevs_discovered": 1, 00:36:23.110 "num_base_bdevs_operational": 3, 00:36:23.110 "base_bdevs_list": [ 00:36:23.110 { 00:36:23.110 "name": "BaseBdev1", 00:36:23.110 "uuid": "d7d9410b-5bc6-4db6-bd2d-9883d6d22a36", 00:36:23.110 "is_configured": true, 00:36:23.110 "data_offset": 0, 00:36:23.110 "data_size": 65536 00:36:23.110 }, 00:36:23.110 { 00:36:23.110 "name": "BaseBdev2", 00:36:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.110 "is_configured": false, 00:36:23.110 "data_offset": 0, 00:36:23.110 "data_size": 0 00:36:23.110 }, 00:36:23.110 { 00:36:23.110 "name": "BaseBdev3", 00:36:23.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.110 "is_configured": false, 00:36:23.110 "data_offset": 0, 00:36:23.110 "data_size": 0 00:36:23.110 } 00:36:23.110 ] 00:36:23.110 }' 00:36:23.110 16:13:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:23.110 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:36:23.368 16:13:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:23.627 [2024-07-22 16:13:27.867262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:23.627 [2024-07-22 16:13:27.867363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:36:23.627 16:13:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:36:23.627 16:13:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:23.885 [2024-07-22 16:13:28.139518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:23.885 [2024-07-22 16:13:28.142161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:23.885 [2024-07-22 16:13:28.142252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:23.885 [2024-07-22 16:13:28.142271] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:23.885 [2024-07-22 16:13:28.142290] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:24.143 "name": "Existed_Raid", 00:36:24.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.143 "strip_size_kb": 64, 00:36:24.143 "state": "configuring", 00:36:24.143 "raid_level": "concat", 00:36:24.143 "superblock": false, 00:36:24.143 "num_base_bdevs": 3, 00:36:24.143 "num_base_bdevs_discovered": 1, 00:36:24.143 "num_base_bdevs_operational": 3, 00:36:24.143 "base_bdevs_list": [ 00:36:24.143 { 00:36:24.143 "name": "BaseBdev1", 00:36:24.143 "uuid": "d7d9410b-5bc6-4db6-bd2d-9883d6d22a36", 00:36:24.143 "is_configured": true, 00:36:24.143 "data_offset": 0, 00:36:24.143 "data_size": 65536 00:36:24.143 }, 00:36:24.143 { 00:36:24.143 "name": "BaseBdev2", 00:36:24.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.143 "is_configured": false, 00:36:24.143 "data_offset": 0, 00:36:24.143 "data_size": 0 00:36:24.143 }, 00:36:24.143 { 00:36:24.143 "name": "BaseBdev3", 00:36:24.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:24.143 "is_configured": false, 00:36:24.143 "data_offset": 0, 00:36:24.143 "data_size": 0 00:36:24.143 } 00:36:24.143 ] 00:36:24.143 }' 00:36:24.143 16:13:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:24.143 16:13:28 -- common/autotest_common.sh@10 -- # set +x 00:36:24.708 16:13:28 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:24.974 [2024-07-22 16:13:29.070636] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:24.974 BaseBdev2 00:36:24.974 16:13:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:36:24.974 16:13:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:36:24.974 16:13:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:24.974 16:13:29 -- common/autotest_common.sh@889 -- # local i 00:36:24.974 16:13:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:24.974 16:13:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:24.974 16:13:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:25.231 16:13:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:25.490 [ 00:36:25.490 { 00:36:25.490 "name": "BaseBdev2", 00:36:25.490 "aliases": [ 00:36:25.490 "54adc630-0c0e-42ec-aeb8-e094c9efcbbb" 00:36:25.490 ], 00:36:25.490 "product_name": "Malloc disk", 00:36:25.490 "block_size": 512, 00:36:25.490 "num_blocks": 65536, 00:36:25.490 "uuid": "54adc630-0c0e-42ec-aeb8-e094c9efcbbb", 00:36:25.490 "assigned_rate_limits": { 00:36:25.490 "rw_ios_per_sec": 0, 00:36:25.490 "rw_mbytes_per_sec": 0, 00:36:25.490 "r_mbytes_per_sec": 0, 00:36:25.490 "w_mbytes_per_sec": 0 00:36:25.490 }, 00:36:25.490 "claimed": true, 00:36:25.490 "claim_type": "exclusive_write", 00:36:25.490 "zoned": false, 00:36:25.490 "supported_io_types": { 00:36:25.490 "read": true, 00:36:25.490 "write": true, 00:36:25.490 "unmap": true, 00:36:25.490 "write_zeroes": true, 00:36:25.490 "flush": true, 00:36:25.490 "reset": true, 00:36:25.490 "compare": false, 00:36:25.490 "compare_and_write": false, 00:36:25.490 "abort": true, 00:36:25.490 "nvme_admin": false, 00:36:25.490 "nvme_io": false 00:36:25.490 }, 00:36:25.490 "memory_domains": [ 00:36:25.490 { 00:36:25.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:25.490 "dma_device_type": 2 00:36:25.490 } 00:36:25.490 ], 00:36:25.490 "driver_specific": {} 00:36:25.490 } 00:36:25.490 ] 00:36:25.490 16:13:29 -- common/autotest_common.sh@895 -- # return 0 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.490 16:13:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
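Each verify_raid_bdev_state call in this trace reduces to one RPC plus the jq filter shown in the xtrace just above; the captured raid_bdev_info JSON continues immediately below. A rough reconstruction of that helper, assuming simplified field checks (the real function in test/bdev/bdev_raid.sh may compare more than this); the RPC, the jq filter, and the field names all appear in the log:

```bash
# Approximate shape of verify_raid_bdev_state as seen in the xtrace; the exact
# comparisons in test/bdev/bdev_raid.sh may differ.
verify_raid_bdev_state() {
    local raid_bdev_name=$1 expected_state=$2 raid_level=$3 strip_size=$4
    local num_base_bdevs_operational=$5
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    local raid_bdev_info

    # Fetch every raid bdev and keep only the one under test.
    raid_bdev_info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

    # Check the fields that also appear in the captured JSON below.
    [[ $(jq -r '.state' <<<"$raid_bdev_info") == "$expected_state" ]] || return 1
    [[ $(jq -r '.raid_level' <<<"$raid_bdev_info") == "$raid_level" ]] || return 1
    [[ $(jq -r '.strip_size_kb' <<<"$raid_bdev_info") == "$strip_size" ]] || return 1
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$raid_bdev_info") == \
       "$num_base_bdevs_operational" ]] || return 1
}
```

It is invoked in this trace exactly as shown in the xtrace, e.g. `verify_raid_bdev_state Existed_Raid configuring concat 64 3`.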
00:36:25.749 16:13:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:25.749 "name": "Existed_Raid", 00:36:25.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.749 "strip_size_kb": 64, 00:36:25.749 "state": "configuring", 00:36:25.749 "raid_level": "concat", 00:36:25.749 "superblock": false, 00:36:25.749 "num_base_bdevs": 3, 00:36:25.749 "num_base_bdevs_discovered": 2, 00:36:25.749 "num_base_bdevs_operational": 3, 00:36:25.749 "base_bdevs_list": [ 00:36:25.749 { 00:36:25.749 "name": "BaseBdev1", 00:36:25.749 "uuid": "d7d9410b-5bc6-4db6-bd2d-9883d6d22a36", 00:36:25.749 "is_configured": true, 00:36:25.749 "data_offset": 0, 00:36:25.749 "data_size": 65536 00:36:25.749 }, 00:36:25.749 { 00:36:25.749 "name": "BaseBdev2", 00:36:25.749 "uuid": "54adc630-0c0e-42ec-aeb8-e094c9efcbbb", 00:36:25.749 "is_configured": true, 00:36:25.749 "data_offset": 0, 00:36:25.749 "data_size": 65536 00:36:25.749 }, 00:36:25.749 { 00:36:25.749 "name": "BaseBdev3", 00:36:25.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:25.749 "is_configured": false, 00:36:25.749 "data_offset": 0, 00:36:25.749 "data_size": 0 00:36:25.749 } 00:36:25.749 ] 00:36:25.749 }' 00:36:25.749 16:13:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:25.749 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:36:26.007 16:13:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:26.264 [2024-07-22 16:13:30.515156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:26.264 [2024-07-22 16:13:30.515476] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:36:26.264 [2024-07-22 16:13:30.515537] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:26.264 [2024-07-22 16:13:30.515784] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:36:26.264 [2024-07-22 16:13:30.516415] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:36:26.264 [2024-07-22 16:13:30.516586] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:36:26.264 [2024-07-22 16:13:30.517045] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:26.264 BaseBdev3 00:36:26.522 16:13:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:36:26.522 16:13:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:36:26.522 16:13:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:26.522 16:13:30 -- common/autotest_common.sh@889 -- # local i 00:36:26.522 16:13:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:26.522 16:13:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:26.522 16:13:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:26.780 16:13:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:26.780 [ 00:36:26.780 { 00:36:26.780 "name": "BaseBdev3", 00:36:26.780 "aliases": [ 00:36:26.780 "995bf43f-1b06-46c6-a25a-4ad2c7f75df1" 00:36:26.780 ], 00:36:26.780 "product_name": "Malloc disk", 00:36:26.780 "block_size": 512, 00:36:26.780 "num_blocks": 65536, 00:36:26.780 "uuid": "995bf43f-1b06-46c6-a25a-4ad2c7f75df1", 00:36:26.780 "assigned_rate_limits": { 00:36:26.780 
"rw_ios_per_sec": 0, 00:36:26.780 "rw_mbytes_per_sec": 0, 00:36:26.780 "r_mbytes_per_sec": 0, 00:36:26.780 "w_mbytes_per_sec": 0 00:36:26.780 }, 00:36:26.780 "claimed": true, 00:36:26.780 "claim_type": "exclusive_write", 00:36:26.780 "zoned": false, 00:36:26.780 "supported_io_types": { 00:36:26.780 "read": true, 00:36:26.780 "write": true, 00:36:26.780 "unmap": true, 00:36:26.780 "write_zeroes": true, 00:36:26.780 "flush": true, 00:36:26.780 "reset": true, 00:36:26.780 "compare": false, 00:36:26.780 "compare_and_write": false, 00:36:26.780 "abort": true, 00:36:26.780 "nvme_admin": false, 00:36:26.780 "nvme_io": false 00:36:26.780 }, 00:36:26.780 "memory_domains": [ 00:36:26.780 { 00:36:26.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:26.780 "dma_device_type": 2 00:36:26.780 } 00:36:26.780 ], 00:36:26.780 "driver_specific": {} 00:36:26.780 } 00:36:26.780 ] 00:36:27.038 16:13:31 -- common/autotest_common.sh@895 -- # return 0 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.038 16:13:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:27.297 16:13:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:27.297 "name": "Existed_Raid", 00:36:27.297 "uuid": "ff30eb07-2894-4d6f-a065-aeb696e47385", 00:36:27.297 "strip_size_kb": 64, 00:36:27.297 "state": "online", 00:36:27.297 "raid_level": "concat", 00:36:27.297 "superblock": false, 00:36:27.297 "num_base_bdevs": 3, 00:36:27.297 "num_base_bdevs_discovered": 3, 00:36:27.297 "num_base_bdevs_operational": 3, 00:36:27.297 "base_bdevs_list": [ 00:36:27.297 { 00:36:27.297 "name": "BaseBdev1", 00:36:27.297 "uuid": "d7d9410b-5bc6-4db6-bd2d-9883d6d22a36", 00:36:27.297 "is_configured": true, 00:36:27.297 "data_offset": 0, 00:36:27.297 "data_size": 65536 00:36:27.297 }, 00:36:27.297 { 00:36:27.297 "name": "BaseBdev2", 00:36:27.297 "uuid": "54adc630-0c0e-42ec-aeb8-e094c9efcbbb", 00:36:27.297 "is_configured": true, 00:36:27.297 "data_offset": 0, 00:36:27.297 "data_size": 65536 00:36:27.297 }, 00:36:27.297 { 00:36:27.297 "name": "BaseBdev3", 00:36:27.297 "uuid": "995bf43f-1b06-46c6-a25a-4ad2c7f75df1", 00:36:27.297 "is_configured": true, 00:36:27.297 "data_offset": 0, 00:36:27.297 "data_size": 65536 00:36:27.297 } 00:36:27.297 ] 00:36:27.297 }' 00:36:27.297 16:13:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:27.297 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:36:27.555 16:13:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:36:27.813 [2024-07-22 16:13:31.871885] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:27.813 [2024-07-22 16:13:31.872111] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:27.813 [2024-07-22 16:13:31.872292] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.813 16:13:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:28.071 16:13:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:28.071 "name": "Existed_Raid", 00:36:28.071 "uuid": "ff30eb07-2894-4d6f-a065-aeb696e47385", 00:36:28.071 "strip_size_kb": 64, 00:36:28.071 "state": "offline", 00:36:28.071 "raid_level": "concat", 00:36:28.071 "superblock": false, 00:36:28.071 "num_base_bdevs": 3, 00:36:28.071 "num_base_bdevs_discovered": 2, 00:36:28.071 "num_base_bdevs_operational": 2, 00:36:28.071 "base_bdevs_list": [ 00:36:28.071 { 00:36:28.071 "name": null, 00:36:28.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.071 "is_configured": false, 00:36:28.071 "data_offset": 0, 00:36:28.071 "data_size": 65536 00:36:28.071 }, 00:36:28.071 { 00:36:28.072 "name": "BaseBdev2", 00:36:28.072 "uuid": "54adc630-0c0e-42ec-aeb8-e094c9efcbbb", 00:36:28.072 "is_configured": true, 00:36:28.072 "data_offset": 0, 00:36:28.072 "data_size": 65536 00:36:28.072 }, 00:36:28.072 { 00:36:28.072 "name": "BaseBdev3", 00:36:28.072 "uuid": "995bf43f-1b06-46c6-a25a-4ad2c7f75df1", 00:36:28.072 "is_configured": true, 00:36:28.072 "data_offset": 0, 00:36:28.072 "data_size": 65536 00:36:28.072 } 00:36:28.072 ] 00:36:28.072 }' 00:36:28.072 16:13:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:28.072 16:13:32 -- common/autotest_common.sh@10 -- # set +x 00:36:28.330 16:13:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:36:28.330 16:13:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:28.330 16:13:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:28.330 16:13:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:28.588 16:13:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:28.588 16:13:32 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:28.588 16:13:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:28.846 [2024-07-22 16:13:33.055192] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:29.104 16:13:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:29.104 16:13:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:29.104 16:13:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.104 16:13:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:29.362 16:13:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:29.362 16:13:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:29.362 16:13:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:29.620 [2024-07-22 16:13:33.668739] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:29.620 [2024-07-22 16:13:33.668834] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:36:29.620 16:13:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:29.620 16:13:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:29.620 16:13:33 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:29.620 16:13:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:36:29.877 16:13:34 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:36:29.877 16:13:34 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:36:29.877 16:13:34 -- bdev/bdev_raid.sh@287 -- # killprocess 73408 00:36:29.877 16:13:34 -- common/autotest_common.sh@926 -- # '[' -z 73408 ']' 00:36:29.877 16:13:34 -- common/autotest_common.sh@930 -- # kill -0 73408 00:36:29.877 16:13:34 -- common/autotest_common.sh@931 -- # uname 00:36:29.877 16:13:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:29.877 16:13:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73408 00:36:29.877 16:13:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:29.877 killing process with pid 73408 00:36:29.877 16:13:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:29.877 16:13:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73408' 00:36:29.877 16:13:34 -- common/autotest_common.sh@945 -- # kill 73408 00:36:29.877 [2024-07-22 16:13:34.080135] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:29.877 16:13:34 -- common/autotest_common.sh@950 -- # wait 73408 00:36:29.877 [2024-07-22 16:13:34.080281] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:31.249 ************************************ 00:36:31.249 END TEST raid_state_function_test 00:36:31.249 ************************************ 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:36:31.249 00:36:31.249 real 0m11.513s 00:36:31.249 user 0m18.808s 00:36:31.249 sys 0m1.892s 00:36:31.249 16:13:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:31.249 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:36:31.249 16:13:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:36:31.249 
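The END TEST banner and the real/user/sys summary just above, and the START TEST banner that follows, come from the run_test wrapper whose invocation is traced here. A rough reconstruction inferred only from what this log shows (the real helper in autotest_common.sh also toggles xtrace and tracks suite results, so treat this as an approximation):

```bash
# Inferred shape of run_test based on the banners and `time` output in this log;
# the actual implementation in test/common/autotest_common.sh is more involved.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"          # produces the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
```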
16:13:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:31.249 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:36:31.249 ************************************ 00:36:31.249 START TEST raid_state_function_test_sb 00:36:31.249 ************************************ 00:36:31.249 16:13:35 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=73764 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73764' 00:36:31.249 Process raid pid: 73764 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:31.249 16:13:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73764 /var/tmp/spdk-raid.sock 00:36:31.249 16:13:35 -- common/autotest_common.sh@819 -- # '[' -z 73764 ']' 00:36:31.249 16:13:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:31.249 16:13:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:31.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:31.249 16:13:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:31.249 16:13:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:31.249 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:36:31.249 [2024-07-22 16:13:35.514952] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
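This _sb run repeats the same state-machine exercise with superblock=true, so superblock_create_arg becomes -s and every bdev_raid_create below carries that flag; with a superblock the raid configuration is persisted on the base bdevs, which is why the JSON later reports data_offset 2048 and data_size 63488 instead of 0 and 65536. The create call, verbatim from the trace that follows, differs from the non-superblock run only by the extra -s:

```bash
# The create call used by the _sb variant: identical to the non-superblock run
# except for -s, which stores the raid configuration in a superblock on the base bdevs.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
```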
00:36:31.249 [2024-07-22 16:13:35.515147] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.507 [2024-07-22 16:13:35.695662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.775 [2024-07-22 16:13:35.957752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.048 [2024-07-22 16:13:36.170832] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:32.305 16:13:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:32.305 16:13:36 -- common/autotest_common.sh@852 -- # return 0 00:36:32.305 16:13:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:32.589 [2024-07-22 16:13:36.758563] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:32.589 [2024-07-22 16:13:36.758647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:32.589 [2024-07-22 16:13:36.758665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:32.589 [2024-07-22 16:13:36.758684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:32.589 [2024-07-22 16:13:36.758695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:32.589 [2024-07-22 16:13:36.758712] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:32.589 16:13:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:32.846 16:13:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:32.846 "name": "Existed_Raid", 00:36:32.846 "uuid": "28813afe-75eb-4f70-a42e-20c5b9691754", 00:36:32.846 "strip_size_kb": 64, 00:36:32.846 "state": "configuring", 00:36:32.846 "raid_level": "concat", 00:36:32.846 "superblock": true, 00:36:32.846 "num_base_bdevs": 3, 00:36:32.846 "num_base_bdevs_discovered": 0, 00:36:32.846 "num_base_bdevs_operational": 3, 00:36:32.846 "base_bdevs_list": [ 00:36:32.846 { 00:36:32.846 "name": "BaseBdev1", 00:36:32.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:32.846 "is_configured": false, 00:36:32.846 "data_offset": 0, 00:36:32.846 "data_size": 0 00:36:32.846 }, 00:36:32.846 { 00:36:32.846 "name": "BaseBdev2", 00:36:32.846 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:32.846 "is_configured": false, 00:36:32.846 "data_offset": 0, 00:36:32.846 "data_size": 0 00:36:32.846 }, 00:36:32.846 { 00:36:32.846 "name": "BaseBdev3", 00:36:32.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:32.846 "is_configured": false, 00:36:32.846 "data_offset": 0, 00:36:32.846 "data_size": 0 00:36:32.846 } 00:36:32.846 ] 00:36:32.846 }' 00:36:32.846 16:13:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:32.846 16:13:37 -- common/autotest_common.sh@10 -- # set +x 00:36:33.104 16:13:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:33.362 [2024-07-22 16:13:37.534595] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:33.362 [2024-07-22 16:13:37.534887] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:36:33.362 16:13:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:33.620 [2024-07-22 16:13:37.758766] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:33.620 [2024-07-22 16:13:37.759131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:33.620 [2024-07-22 16:13:37.759249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:33.620 [2024-07-22 16:13:37.759314] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:33.620 [2024-07-22 16:13:37.759424] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:33.620 [2024-07-22 16:13:37.759582] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:33.620 16:13:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:33.879 [2024-07-22 16:13:38.070642] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:33.879 BaseBdev1 00:36:33.879 16:13:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:36:33.879 16:13:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:36:33.879 16:13:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:33.879 16:13:38 -- common/autotest_common.sh@889 -- # local i 00:36:33.879 16:13:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:33.879 16:13:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:33.879 16:13:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:34.137 16:13:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:34.396 [ 00:36:34.396 { 00:36:34.396 "name": "BaseBdev1", 00:36:34.396 "aliases": [ 00:36:34.396 "51525f34-0753-4881-b041-df9093478e65" 00:36:34.396 ], 00:36:34.396 "product_name": "Malloc disk", 00:36:34.396 "block_size": 512, 00:36:34.396 "num_blocks": 65536, 00:36:34.396 "uuid": "51525f34-0753-4881-b041-df9093478e65", 00:36:34.396 "assigned_rate_limits": { 00:36:34.396 "rw_ios_per_sec": 0, 00:36:34.396 "rw_mbytes_per_sec": 0, 00:36:34.396 "r_mbytes_per_sec": 0, 00:36:34.396 
"w_mbytes_per_sec": 0 00:36:34.396 }, 00:36:34.396 "claimed": true, 00:36:34.396 "claim_type": "exclusive_write", 00:36:34.396 "zoned": false, 00:36:34.396 "supported_io_types": { 00:36:34.396 "read": true, 00:36:34.396 "write": true, 00:36:34.396 "unmap": true, 00:36:34.396 "write_zeroes": true, 00:36:34.396 "flush": true, 00:36:34.396 "reset": true, 00:36:34.396 "compare": false, 00:36:34.396 "compare_and_write": false, 00:36:34.396 "abort": true, 00:36:34.396 "nvme_admin": false, 00:36:34.396 "nvme_io": false 00:36:34.396 }, 00:36:34.396 "memory_domains": [ 00:36:34.396 { 00:36:34.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:34.396 "dma_device_type": 2 00:36:34.396 } 00:36:34.396 ], 00:36:34.396 "driver_specific": {} 00:36:34.396 } 00:36:34.396 ] 00:36:34.396 16:13:38 -- common/autotest_common.sh@895 -- # return 0 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.396 16:13:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:34.654 16:13:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:34.654 "name": "Existed_Raid", 00:36:34.654 "uuid": "4807699a-e43c-4c87-952f-42f7d4252aef", 00:36:34.654 "strip_size_kb": 64, 00:36:34.654 "state": "configuring", 00:36:34.654 "raid_level": "concat", 00:36:34.654 "superblock": true, 00:36:34.654 "num_base_bdevs": 3, 00:36:34.654 "num_base_bdevs_discovered": 1, 00:36:34.654 "num_base_bdevs_operational": 3, 00:36:34.654 "base_bdevs_list": [ 00:36:34.655 { 00:36:34.655 "name": "BaseBdev1", 00:36:34.655 "uuid": "51525f34-0753-4881-b041-df9093478e65", 00:36:34.655 "is_configured": true, 00:36:34.655 "data_offset": 2048, 00:36:34.655 "data_size": 63488 00:36:34.655 }, 00:36:34.655 { 00:36:34.655 "name": "BaseBdev2", 00:36:34.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.655 "is_configured": false, 00:36:34.655 "data_offset": 0, 00:36:34.655 "data_size": 0 00:36:34.655 }, 00:36:34.655 { 00:36:34.655 "name": "BaseBdev3", 00:36:34.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.655 "is_configured": false, 00:36:34.655 "data_offset": 0, 00:36:34.655 "data_size": 0 00:36:34.655 } 00:36:34.655 ] 00:36:34.655 }' 00:36:34.655 16:13:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:34.655 16:13:38 -- common/autotest_common.sh@10 -- # set +x 00:36:35.221 16:13:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:35.479 [2024-07-22 16:13:39.503166] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:35.479 [2024-07-22 16:13:39.503476] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:36:35.479 16:13:39 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:36:35.479 16:13:39 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:35.736 16:13:39 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:35.994 BaseBdev1 00:36:35.994 16:13:40 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:36:35.994 16:13:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:36:35.994 16:13:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:35.994 16:13:40 -- common/autotest_common.sh@889 -- # local i 00:36:35.994 16:13:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:35.994 16:13:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:35.994 16:13:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:36.252 16:13:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:36.540 [ 00:36:36.540 { 00:36:36.540 "name": "BaseBdev1", 00:36:36.540 "aliases": [ 00:36:36.540 "10cc3e81-b40b-4d3f-a38e-1e2746aa6bfc" 00:36:36.540 ], 00:36:36.540 "product_name": "Malloc disk", 00:36:36.540 "block_size": 512, 00:36:36.540 "num_blocks": 65536, 00:36:36.540 "uuid": "10cc3e81-b40b-4d3f-a38e-1e2746aa6bfc", 00:36:36.540 "assigned_rate_limits": { 00:36:36.541 "rw_ios_per_sec": 0, 00:36:36.541 "rw_mbytes_per_sec": 0, 00:36:36.541 "r_mbytes_per_sec": 0, 00:36:36.541 "w_mbytes_per_sec": 0 00:36:36.541 }, 00:36:36.541 "claimed": false, 00:36:36.541 "zoned": false, 00:36:36.541 "supported_io_types": { 00:36:36.541 "read": true, 00:36:36.541 "write": true, 00:36:36.541 "unmap": true, 00:36:36.541 "write_zeroes": true, 00:36:36.541 "flush": true, 00:36:36.541 "reset": true, 00:36:36.541 "compare": false, 00:36:36.541 "compare_and_write": false, 00:36:36.541 "abort": true, 00:36:36.541 "nvme_admin": false, 00:36:36.541 "nvme_io": false 00:36:36.541 }, 00:36:36.541 "memory_domains": [ 00:36:36.541 { 00:36:36.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:36.541 "dma_device_type": 2 00:36:36.541 } 00:36:36.541 ], 00:36:36.541 "driver_specific": {} 00:36:36.541 } 00:36:36.541 ] 00:36:36.541 16:13:40 -- common/autotest_common.sh@895 -- # return 0 00:36:36.541 16:13:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:36.798 [2024-07-22 16:13:40.875405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:36.798 [2024-07-22 16:13:40.877967] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:36.798 [2024-07-22 16:13:40.878044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:36.798 [2024-07-22 16:13:40.878062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:36.798 [2024-07-22 16:13:40.878079] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:36.798 
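Every base bdev in this log is brought up with the same three RPCs, most recently for BaseBdev1 just above: create the malloc bdev, drain examine callbacks, then confirm the bdev is registered. A condensed sketch of that create-plus-waitforbdev sequence (waitforbdev in autotest_common.sh wraps the final query with its own 2000 ms bdev_timeout handling); all three commands appear verbatim in the trace:

```bash
# Malloc base-bdev bring-up pattern from the trace: 32 MB with 512-byte blocks,
# i.e. the 65536 num_blocks reported in the JSON above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
"$rpc" -s "$sock" bdev_wait_for_examine
"$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000   # -t: wait up to 2000 ms for the bdev
```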
16:13:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:36.798 16:13:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.799 16:13:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:37.057 16:13:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:37.057 "name": "Existed_Raid", 00:36:37.057 "uuid": "2403c6ff-55af-4065-9053-08b055927095", 00:36:37.057 "strip_size_kb": 64, 00:36:37.057 "state": "configuring", 00:36:37.057 "raid_level": "concat", 00:36:37.057 "superblock": true, 00:36:37.057 "num_base_bdevs": 3, 00:36:37.057 "num_base_bdevs_discovered": 1, 00:36:37.057 "num_base_bdevs_operational": 3, 00:36:37.057 "base_bdevs_list": [ 00:36:37.057 { 00:36:37.057 "name": "BaseBdev1", 00:36:37.057 "uuid": "10cc3e81-b40b-4d3f-a38e-1e2746aa6bfc", 00:36:37.057 "is_configured": true, 00:36:37.057 "data_offset": 2048, 00:36:37.057 "data_size": 63488 00:36:37.057 }, 00:36:37.057 { 00:36:37.057 "name": "BaseBdev2", 00:36:37.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.057 "is_configured": false, 00:36:37.057 "data_offset": 0, 00:36:37.057 "data_size": 0 00:36:37.057 }, 00:36:37.057 { 00:36:37.057 "name": "BaseBdev3", 00:36:37.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.057 "is_configured": false, 00:36:37.057 "data_offset": 0, 00:36:37.057 "data_size": 0 00:36:37.057 } 00:36:37.057 ] 00:36:37.057 }' 00:36:37.057 16:13:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:37.057 16:13:41 -- common/autotest_common.sh@10 -- # set +x 00:36:37.314 16:13:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:36:37.881 [2024-07-22 16:13:41.855473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:37.881 BaseBdev2 00:36:37.881 16:13:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:36:37.881 16:13:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:36:37.881 16:13:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:37.881 16:13:41 -- common/autotest_common.sh@889 -- # local i 00:36:37.881 16:13:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:37.881 16:13:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:37.881 16:13:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:37.881 16:13:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:38.139 [ 00:36:38.139 { 00:36:38.139 "name": "BaseBdev2", 00:36:38.139 "aliases": [ 00:36:38.139 
"cf043e15-b2e8-4c23-b5eb-6f1b1a60c1d8" 00:36:38.139 ], 00:36:38.139 "product_name": "Malloc disk", 00:36:38.139 "block_size": 512, 00:36:38.139 "num_blocks": 65536, 00:36:38.139 "uuid": "cf043e15-b2e8-4c23-b5eb-6f1b1a60c1d8", 00:36:38.139 "assigned_rate_limits": { 00:36:38.139 "rw_ios_per_sec": 0, 00:36:38.139 "rw_mbytes_per_sec": 0, 00:36:38.139 "r_mbytes_per_sec": 0, 00:36:38.139 "w_mbytes_per_sec": 0 00:36:38.139 }, 00:36:38.139 "claimed": true, 00:36:38.139 "claim_type": "exclusive_write", 00:36:38.139 "zoned": false, 00:36:38.139 "supported_io_types": { 00:36:38.139 "read": true, 00:36:38.139 "write": true, 00:36:38.139 "unmap": true, 00:36:38.139 "write_zeroes": true, 00:36:38.139 "flush": true, 00:36:38.139 "reset": true, 00:36:38.139 "compare": false, 00:36:38.139 "compare_and_write": false, 00:36:38.139 "abort": true, 00:36:38.139 "nvme_admin": false, 00:36:38.140 "nvme_io": false 00:36:38.140 }, 00:36:38.140 "memory_domains": [ 00:36:38.140 { 00:36:38.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:38.140 "dma_device_type": 2 00:36:38.140 } 00:36:38.140 ], 00:36:38.140 "driver_specific": {} 00:36:38.140 } 00:36:38.140 ] 00:36:38.140 16:13:42 -- common/autotest_common.sh@895 -- # return 0 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:38.140 16:13:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:38.398 16:13:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:38.398 "name": "Existed_Raid", 00:36:38.398 "uuid": "2403c6ff-55af-4065-9053-08b055927095", 00:36:38.398 "strip_size_kb": 64, 00:36:38.398 "state": "configuring", 00:36:38.398 "raid_level": "concat", 00:36:38.398 "superblock": true, 00:36:38.398 "num_base_bdevs": 3, 00:36:38.398 "num_base_bdevs_discovered": 2, 00:36:38.398 "num_base_bdevs_operational": 3, 00:36:38.398 "base_bdevs_list": [ 00:36:38.398 { 00:36:38.398 "name": "BaseBdev1", 00:36:38.398 "uuid": "10cc3e81-b40b-4d3f-a38e-1e2746aa6bfc", 00:36:38.398 "is_configured": true, 00:36:38.398 "data_offset": 2048, 00:36:38.398 "data_size": 63488 00:36:38.398 }, 00:36:38.398 { 00:36:38.398 "name": "BaseBdev2", 00:36:38.398 "uuid": "cf043e15-b2e8-4c23-b5eb-6f1b1a60c1d8", 00:36:38.398 "is_configured": true, 00:36:38.398 "data_offset": 2048, 00:36:38.398 "data_size": 63488 00:36:38.398 }, 00:36:38.398 { 00:36:38.398 "name": "BaseBdev3", 00:36:38.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.398 "is_configured": false, 00:36:38.398 "data_offset": 0, 00:36:38.398 "data_size": 0 
00:36:38.398 } 00:36:38.398 ] 00:36:38.398 }' 00:36:38.398 16:13:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:38.398 16:13:42 -- common/autotest_common.sh@10 -- # set +x 00:36:38.656 16:13:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:36:39.222 [2024-07-22 16:13:43.211831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:39.222 [2024-07-22 16:13:43.212463] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:36:39.222 [2024-07-22 16:13:43.212617] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:39.222 [2024-07-22 16:13:43.212871] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:36:39.222 [2024-07-22 16:13:43.213446] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:36:39.222 [2024-07-22 16:13:43.213593] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:36:39.222 BaseBdev3 00:36:39.222 [2024-07-22 16:13:43.213918] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:39.222 16:13:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:36:39.222 16:13:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:36:39.222 16:13:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:39.222 16:13:43 -- common/autotest_common.sh@889 -- # local i 00:36:39.222 16:13:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:39.222 16:13:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:39.222 16:13:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:39.480 16:13:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:39.480 [ 00:36:39.480 { 00:36:39.480 "name": "BaseBdev3", 00:36:39.480 "aliases": [ 00:36:39.480 "d89b9eaa-1039-44c8-8a68-42090e19d1ac" 00:36:39.480 ], 00:36:39.480 "product_name": "Malloc disk", 00:36:39.480 "block_size": 512, 00:36:39.480 "num_blocks": 65536, 00:36:39.480 "uuid": "d89b9eaa-1039-44c8-8a68-42090e19d1ac", 00:36:39.480 "assigned_rate_limits": { 00:36:39.480 "rw_ios_per_sec": 0, 00:36:39.480 "rw_mbytes_per_sec": 0, 00:36:39.480 "r_mbytes_per_sec": 0, 00:36:39.480 "w_mbytes_per_sec": 0 00:36:39.480 }, 00:36:39.480 "claimed": true, 00:36:39.480 "claim_type": "exclusive_write", 00:36:39.480 "zoned": false, 00:36:39.480 "supported_io_types": { 00:36:39.480 "read": true, 00:36:39.480 "write": true, 00:36:39.480 "unmap": true, 00:36:39.480 "write_zeroes": true, 00:36:39.480 "flush": true, 00:36:39.480 "reset": true, 00:36:39.480 "compare": false, 00:36:39.480 "compare_and_write": false, 00:36:39.480 "abort": true, 00:36:39.480 "nvme_admin": false, 00:36:39.480 "nvme_io": false 00:36:39.480 }, 00:36:39.480 "memory_domains": [ 00:36:39.480 { 00:36:39.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:39.480 "dma_device_type": 2 00:36:39.480 } 00:36:39.480 ], 00:36:39.480 "driver_specific": {} 00:36:39.480 } 00:36:39.480 ] 00:36:39.480 16:13:43 -- common/autotest_common.sh@895 -- # return 0 00:36:39.480 16:13:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:36:39.480 16:13:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:39.480 16:13:43 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:36:39.480 16:13:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:39.480 16:13:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:39.481 16:13:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:40.047 16:13:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:40.047 "name": "Existed_Raid", 00:36:40.047 "uuid": "2403c6ff-55af-4065-9053-08b055927095", 00:36:40.047 "strip_size_kb": 64, 00:36:40.047 "state": "online", 00:36:40.047 "raid_level": "concat", 00:36:40.047 "superblock": true, 00:36:40.047 "num_base_bdevs": 3, 00:36:40.047 "num_base_bdevs_discovered": 3, 00:36:40.047 "num_base_bdevs_operational": 3, 00:36:40.047 "base_bdevs_list": [ 00:36:40.047 { 00:36:40.047 "name": "BaseBdev1", 00:36:40.047 "uuid": "10cc3e81-b40b-4d3f-a38e-1e2746aa6bfc", 00:36:40.047 "is_configured": true, 00:36:40.047 "data_offset": 2048, 00:36:40.047 "data_size": 63488 00:36:40.047 }, 00:36:40.047 { 00:36:40.047 "name": "BaseBdev2", 00:36:40.047 "uuid": "cf043e15-b2e8-4c23-b5eb-6f1b1a60c1d8", 00:36:40.047 "is_configured": true, 00:36:40.047 "data_offset": 2048, 00:36:40.047 "data_size": 63488 00:36:40.047 }, 00:36:40.047 { 00:36:40.047 "name": "BaseBdev3", 00:36:40.047 "uuid": "d89b9eaa-1039-44c8-8a68-42090e19d1ac", 00:36:40.047 "is_configured": true, 00:36:40.047 "data_offset": 2048, 00:36:40.047 "data_size": 63488 00:36:40.047 } 00:36:40.047 ] 00:36:40.047 }' 00:36:40.047 16:13:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:40.047 16:13:44 -- common/autotest_common.sh@10 -- # set +x 00:36:40.305 16:13:44 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:40.564 [2024-07-22 16:13:44.600453] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:40.564 [2024-07-22 16:13:44.600728] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:40.564 [2024-07-22 16:13:44.600918] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:40.564 16:13:44 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.564 16:13:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:40.846 16:13:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:40.846 "name": "Existed_Raid", 00:36:40.846 "uuid": "2403c6ff-55af-4065-9053-08b055927095", 00:36:40.846 "strip_size_kb": 64, 00:36:40.846 "state": "offline", 00:36:40.846 "raid_level": "concat", 00:36:40.846 "superblock": true, 00:36:40.846 "num_base_bdevs": 3, 00:36:40.846 "num_base_bdevs_discovered": 2, 00:36:40.846 "num_base_bdevs_operational": 2, 00:36:40.846 "base_bdevs_list": [ 00:36:40.846 { 00:36:40.846 "name": null, 00:36:40.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.846 "is_configured": false, 00:36:40.846 "data_offset": 2048, 00:36:40.846 "data_size": 63488 00:36:40.846 }, 00:36:40.846 { 00:36:40.846 "name": "BaseBdev2", 00:36:40.846 "uuid": "cf043e15-b2e8-4c23-b5eb-6f1b1a60c1d8", 00:36:40.846 "is_configured": true, 00:36:40.846 "data_offset": 2048, 00:36:40.846 "data_size": 63488 00:36:40.846 }, 00:36:40.846 { 00:36:40.846 "name": "BaseBdev3", 00:36:40.846 "uuid": "d89b9eaa-1039-44c8-8a68-42090e19d1ac", 00:36:40.846 "is_configured": true, 00:36:40.846 "data_offset": 2048, 00:36:40.846 "data_size": 63488 00:36:40.846 } 00:36:40.846 ] 00:36:40.846 }' 00:36:40.846 16:13:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:40.846 16:13:44 -- common/autotest_common.sh@10 -- # set +x 00:36:41.104 16:13:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:36:41.104 16:13:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:41.104 16:13:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:41.363 16:13:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:41.621 16:13:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:41.621 16:13:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:41.621 16:13:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:41.621 [2024-07-22 16:13:45.875437] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:41.880 16:13:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:41.880 16:13:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:41.880 16:13:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:41.880 16:13:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:36:42.138 16:13:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:36:42.138 16:13:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:42.138 16:13:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:36:42.397 [2024-07-22 16:13:46.471105] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
00:36:42.397 [2024-07-22 16:13:46.471204] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:36:42.397 16:13:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:36:42.397 16:13:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:36:42.397 16:13:46 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:42.397 16:13:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:36:42.656 16:13:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:36:42.656 16:13:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:36:42.656 16:13:46 -- bdev/bdev_raid.sh@287 -- # killprocess 73764 00:36:42.656 16:13:46 -- common/autotest_common.sh@926 -- # '[' -z 73764 ']' 00:36:42.656 16:13:46 -- common/autotest_common.sh@930 -- # kill -0 73764 00:36:42.656 16:13:46 -- common/autotest_common.sh@931 -- # uname 00:36:42.656 16:13:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:42.656 16:13:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73764 00:36:42.656 killing process with pid 73764 00:36:42.656 16:13:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:42.656 16:13:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:42.656 16:13:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73764' 00:36:42.656 16:13:46 -- common/autotest_common.sh@945 -- # kill 73764 00:36:42.656 16:13:46 -- common/autotest_common.sh@950 -- # wait 73764 00:36:42.656 [2024-07-22 16:13:46.906573] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:42.656 [2024-07-22 16:13:46.906733] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:36:44.032 00:36:44.032 real 0m12.759s 00:36:44.032 user 0m20.984s 00:36:44.032 sys 0m2.090s 00:36:44.032 16:13:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:44.032 ************************************ 00:36:44.032 END TEST raid_state_function_test_sb 00:36:44.032 ************************************ 00:36:44.032 16:13:48 -- common/autotest_common.sh@10 -- # set +x 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:36:44.032 16:13:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:36:44.032 16:13:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:44.032 16:13:48 -- common/autotest_common.sh@10 -- # set +x 00:36:44.032 ************************************ 00:36:44.032 START TEST raid_superblock_test 00:36:44.032 ************************************ 00:36:44.032 16:13:48 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:36:44.032 
16:13:48 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@357 -- # raid_pid=74125 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@358 -- # waitforlisten 74125 /var/tmp/spdk-raid.sock 00:36:44.032 16:13:48 -- common/autotest_common.sh@819 -- # '[' -z 74125 ']' 00:36:44.032 16:13:48 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:44.032 16:13:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:44.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:44.032 16:13:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:44.032 16:13:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:44.032 16:13:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:44.032 16:13:48 -- common/autotest_common.sh@10 -- # set +x 00:36:44.290 [2024-07-22 16:13:48.317243] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:36:44.290 [2024-07-22 16:13:48.317394] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74125 ] 00:36:44.290 [2024-07-22 16:13:48.485263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.557 [2024-07-22 16:13:48.755836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.815 [2024-07-22 16:13:48.973466] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:45.074 16:13:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:45.074 16:13:49 -- common/autotest_common.sh@852 -- # return 0 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:45.074 16:13:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:36:45.332 malloc1 00:36:45.332 16:13:49 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:45.590 [2024-07-22 16:13:49.747673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:45.590 [2024-07-22 16:13:49.747790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:36:45.590 [2024-07-22 16:13:49.747844] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:36:45.590 [2024-07-22 16:13:49.747862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:45.590 [2024-07-22 16:13:49.750858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:45.590 [2024-07-22 16:13:49.750904] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:45.590 pt1 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:45.590 16:13:49 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:36:45.878 malloc2 00:36:45.879 16:13:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:46.137 [2024-07-22 16:13:50.243823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:46.137 [2024-07-22 16:13:50.243935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:46.137 [2024-07-22 16:13:50.243981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:36:46.137 [2024-07-22 16:13:50.244029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:46.137 [2024-07-22 16:13:50.247038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:46.137 [2024-07-22 16:13:50.247089] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:46.137 pt2 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:46.137 16:13:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:36:46.395 malloc3 00:36:46.395 16:13:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:46.652 [2024-07-22 16:13:50.795644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:46.652 [2024-07-22 16:13:50.795747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:36:46.653 [2024-07-22 16:13:50.795794] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:36:46.653 [2024-07-22 16:13:50.795813] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:46.653 [2024-07-22 16:13:50.798876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:46.653 [2024-07-22 16:13:50.798936] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:46.653 pt3 00:36:46.653 16:13:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:36:46.653 16:13:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:36:46.653 16:13:50 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:36:46.911 [2024-07-22 16:13:51.015841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:46.911 [2024-07-22 16:13:51.018382] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:46.911 [2024-07-22 16:13:51.018494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:46.911 [2024-07-22 16:13:51.018742] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:36:46.911 [2024-07-22 16:13:51.018770] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:46.911 [2024-07-22 16:13:51.018939] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:36:46.911 [2024-07-22 16:13:51.019424] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:36:46.911 [2024-07-22 16:13:51.019456] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:36:46.911 [2024-07-22 16:13:51.019711] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.911 16:13:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:47.168 16:13:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:47.168 "name": "raid_bdev1", 00:36:47.168 "uuid": "e7006f24-a811-4c63-bb6b-021da951178e", 00:36:47.168 "strip_size_kb": 64, 00:36:47.168 "state": "online", 00:36:47.168 "raid_level": "concat", 00:36:47.168 "superblock": true, 00:36:47.168 "num_base_bdevs": 3, 00:36:47.168 "num_base_bdevs_discovered": 3, 00:36:47.168 "num_base_bdevs_operational": 3, 00:36:47.168 "base_bdevs_list": [ 00:36:47.168 { 00:36:47.168 "name": "pt1", 00:36:47.168 "uuid": 
"5f29efeb-f6c9-545a-bc12-14129d5ee3c2", 00:36:47.168 "is_configured": true, 00:36:47.168 "data_offset": 2048, 00:36:47.168 "data_size": 63488 00:36:47.168 }, 00:36:47.168 { 00:36:47.168 "name": "pt2", 00:36:47.168 "uuid": "0aa007f8-d06e-5fcb-a464-a7c473f1923d", 00:36:47.168 "is_configured": true, 00:36:47.168 "data_offset": 2048, 00:36:47.168 "data_size": 63488 00:36:47.168 }, 00:36:47.168 { 00:36:47.168 "name": "pt3", 00:36:47.168 "uuid": "850d5e7f-a9eb-59af-b2f9-5c687024ccba", 00:36:47.168 "is_configured": true, 00:36:47.168 "data_offset": 2048, 00:36:47.168 "data_size": 63488 00:36:47.168 } 00:36:47.168 ] 00:36:47.168 }' 00:36:47.168 16:13:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:47.168 16:13:51 -- common/autotest_common.sh@10 -- # set +x 00:36:47.427 16:13:51 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:36:47.427 16:13:51 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:47.698 [2024-07-22 16:13:51.832349] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:47.698 16:13:51 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e7006f24-a811-4c63-bb6b-021da951178e 00:36:47.698 16:13:51 -- bdev/bdev_raid.sh@380 -- # '[' -z e7006f24-a811-4c63-bb6b-021da951178e ']' 00:36:47.698 16:13:51 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:47.956 [2024-07-22 16:13:52.060158] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:47.956 [2024-07-22 16:13:52.060225] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:47.956 [2024-07-22 16:13:52.060354] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:47.956 [2024-07-22 16:13:52.060445] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:47.956 [2024-07-22 16:13:52.060466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:36:47.956 16:13:52 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.956 16:13:52 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:36:48.215 16:13:52 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:36:48.215 16:13:52 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:36:48.215 16:13:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:48.215 16:13:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:48.472 16:13:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:48.472 16:13:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:48.730 16:13:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:36:48.730 16:13:52 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:36:48.987 16:13:53 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:48.987 16:13:53 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:49.245 16:13:53 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:36:49.245 16:13:53 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:36:49.245 16:13:53 -- common/autotest_common.sh@640 -- # local es=0 00:36:49.245 16:13:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:36:49.245 16:13:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:49.245 16:13:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:36:49.245 16:13:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:49.245 16:13:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:36:49.245 16:13:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:49.245 16:13:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:36:49.245 16:13:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:49.245 16:13:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:49.245 16:13:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:36:49.504 [2024-07-22 16:13:53.576460] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:49.504 [2024-07-22 16:13:53.579025] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:49.504 [2024-07-22 16:13:53.579101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:49.504 [2024-07-22 16:13:53.579182] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:36:49.504 [2024-07-22 16:13:53.579265] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:36:49.504 [2024-07-22 16:13:53.579303] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:36:49.504 [2024-07-22 16:13:53.579327] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:49.504 [2024-07-22 16:13:53.579346] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:36:49.504 request: 00:36:49.504 { 00:36:49.504 "name": "raid_bdev1", 00:36:49.504 "raid_level": "concat", 00:36:49.504 "base_bdevs": [ 00:36:49.504 "malloc1", 00:36:49.504 "malloc2", 00:36:49.505 "malloc3" 00:36:49.505 ], 00:36:49.505 "superblock": false, 00:36:49.505 "strip_size_kb": 64, 00:36:49.505 "method": "bdev_raid_create", 00:36:49.505 "req_id": 1 00:36:49.505 } 00:36:49.505 Got JSON-RPC error response 00:36:49.505 response: 00:36:49.505 { 00:36:49.505 "code": -17, 00:36:49.505 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:49.505 } 00:36:49.505 16:13:53 -- common/autotest_common.sh@643 -- # es=1 00:36:49.505 16:13:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:36:49.505 16:13:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:36:49.505 16:13:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:36:49.505 16:13:53 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.505 16:13:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:36:49.764 16:13:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:36:49.764 16:13:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:36:49.764 16:13:53 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:50.023 [2024-07-22 16:13:54.092554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:50.023 [2024-07-22 16:13:54.092684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:50.023 [2024-07-22 16:13:54.092722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:36:50.023 [2024-07-22 16:13:54.092742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:50.023 [2024-07-22 16:13:54.095900] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:50.023 [2024-07-22 16:13:54.095952] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:50.023 [2024-07-22 16:13:54.096107] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:36:50.023 [2024-07-22 16:13:54.096191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:50.023 pt1 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:50.023 16:13:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:50.281 16:13:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:50.281 "name": "raid_bdev1", 00:36:50.281 "uuid": "e7006f24-a811-4c63-bb6b-021da951178e", 00:36:50.281 "strip_size_kb": 64, 00:36:50.281 "state": "configuring", 00:36:50.281 "raid_level": "concat", 00:36:50.281 "superblock": true, 00:36:50.281 "num_base_bdevs": 3, 00:36:50.281 "num_base_bdevs_discovered": 1, 00:36:50.281 "num_base_bdevs_operational": 3, 00:36:50.281 "base_bdevs_list": [ 00:36:50.281 { 00:36:50.281 "name": "pt1", 00:36:50.282 "uuid": "5f29efeb-f6c9-545a-bc12-14129d5ee3c2", 00:36:50.282 "is_configured": true, 00:36:50.282 "data_offset": 2048, 00:36:50.282 "data_size": 63488 00:36:50.282 }, 00:36:50.282 { 00:36:50.282 "name": null, 00:36:50.282 "uuid": "0aa007f8-d06e-5fcb-a464-a7c473f1923d", 00:36:50.282 "is_configured": false, 00:36:50.282 "data_offset": 2048, 00:36:50.282 "data_size": 63488 00:36:50.282 }, 00:36:50.282 { 00:36:50.282 "name": null, 00:36:50.282 "uuid": "850d5e7f-a9eb-59af-b2f9-5c687024ccba", 00:36:50.282 "is_configured": false, 00:36:50.282 
"data_offset": 2048, 00:36:50.282 "data_size": 63488 00:36:50.282 } 00:36:50.282 ] 00:36:50.282 }' 00:36:50.282 16:13:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:50.282 16:13:54 -- common/autotest_common.sh@10 -- # set +x 00:36:50.540 16:13:54 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:36:50.540 16:13:54 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:50.798 [2024-07-22 16:13:54.948762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:50.798 [2024-07-22 16:13:54.948882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:50.798 [2024-07-22 16:13:54.948923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:36:50.798 [2024-07-22 16:13:54.948943] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:50.798 [2024-07-22 16:13:54.949562] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:50.798 [2024-07-22 16:13:54.949604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:50.798 [2024-07-22 16:13:54.949727] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:50.798 [2024-07-22 16:13:54.949763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:50.798 pt2 00:36:50.798 16:13:54 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:51.058 [2024-07-22 16:13:55.264872] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.058 16:13:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.317 16:13:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:51.317 "name": "raid_bdev1", 00:36:51.317 "uuid": "e7006f24-a811-4c63-bb6b-021da951178e", 00:36:51.317 "strip_size_kb": 64, 00:36:51.317 "state": "configuring", 00:36:51.317 "raid_level": "concat", 00:36:51.317 "superblock": true, 00:36:51.317 "num_base_bdevs": 3, 00:36:51.317 "num_base_bdevs_discovered": 1, 00:36:51.317 "num_base_bdevs_operational": 3, 00:36:51.317 "base_bdevs_list": [ 00:36:51.317 { 00:36:51.317 "name": "pt1", 00:36:51.317 "uuid": "5f29efeb-f6c9-545a-bc12-14129d5ee3c2", 00:36:51.317 "is_configured": true, 00:36:51.317 "data_offset": 2048, 00:36:51.317 "data_size": 63488 00:36:51.317 }, 00:36:51.317 { 00:36:51.317 "name": null, 00:36:51.317 "uuid": 
"0aa007f8-d06e-5fcb-a464-a7c473f1923d", 00:36:51.317 "is_configured": false, 00:36:51.317 "data_offset": 2048, 00:36:51.317 "data_size": 63488 00:36:51.317 }, 00:36:51.317 { 00:36:51.317 "name": null, 00:36:51.317 "uuid": "850d5e7f-a9eb-59af-b2f9-5c687024ccba", 00:36:51.317 "is_configured": false, 00:36:51.317 "data_offset": 2048, 00:36:51.317 "data_size": 63488 00:36:51.317 } 00:36:51.317 ] 00:36:51.317 }' 00:36:51.317 16:13:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:51.317 16:13:55 -- common/autotest_common.sh@10 -- # set +x 00:36:51.884 16:13:55 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:36:51.884 16:13:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:51.884 16:13:55 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:52.142 [2024-07-22 16:13:56.225098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:52.142 [2024-07-22 16:13:56.225218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:52.142 [2024-07-22 16:13:56.225259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:36:52.142 [2024-07-22 16:13:56.225276] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:52.142 [2024-07-22 16:13:56.225903] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:52.142 [2024-07-22 16:13:56.225930] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:52.142 [2024-07-22 16:13:56.226093] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:36:52.142 [2024-07-22 16:13:56.226126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:52.142 pt2 00:36:52.142 16:13:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:52.142 16:13:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:52.142 16:13:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:52.400 [2024-07-22 16:13:56.501233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:52.400 [2024-07-22 16:13:56.501343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:52.400 [2024-07-22 16:13:56.501384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:36:52.401 [2024-07-22 16:13:56.501401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:52.401 [2024-07-22 16:13:56.502058] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:52.401 [2024-07-22 16:13:56.502090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:52.401 [2024-07-22 16:13:56.502219] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:36:52.401 [2024-07-22 16:13:56.502250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:52.401 [2024-07-22 16:13:56.502432] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:36:52.401 [2024-07-22 16:13:56.502448] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:52.401 [2024-07-22 16:13:56.502628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005790 00:36:52.401 [2024-07-22 16:13:56.503113] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:36:52.401 [2024-07-22 16:13:56.503142] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:36:52.401 [2024-07-22 16:13:56.503304] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:52.401 pt3 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.401 16:13:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:52.659 16:13:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:52.659 "name": "raid_bdev1", 00:36:52.659 "uuid": "e7006f24-a811-4c63-bb6b-021da951178e", 00:36:52.659 "strip_size_kb": 64, 00:36:52.659 "state": "online", 00:36:52.659 "raid_level": "concat", 00:36:52.659 "superblock": true, 00:36:52.659 "num_base_bdevs": 3, 00:36:52.659 "num_base_bdevs_discovered": 3, 00:36:52.659 "num_base_bdevs_operational": 3, 00:36:52.659 "base_bdevs_list": [ 00:36:52.659 { 00:36:52.659 "name": "pt1", 00:36:52.659 "uuid": "5f29efeb-f6c9-545a-bc12-14129d5ee3c2", 00:36:52.659 "is_configured": true, 00:36:52.659 "data_offset": 2048, 00:36:52.659 "data_size": 63488 00:36:52.659 }, 00:36:52.659 { 00:36:52.659 "name": "pt2", 00:36:52.659 "uuid": "0aa007f8-d06e-5fcb-a464-a7c473f1923d", 00:36:52.659 "is_configured": true, 00:36:52.659 "data_offset": 2048, 00:36:52.659 "data_size": 63488 00:36:52.659 }, 00:36:52.659 { 00:36:52.659 "name": "pt3", 00:36:52.659 "uuid": "850d5e7f-a9eb-59af-b2f9-5c687024ccba", 00:36:52.659 "is_configured": true, 00:36:52.659 "data_offset": 2048, 00:36:52.659 "data_size": 63488 00:36:52.659 } 00:36:52.659 ] 00:36:52.659 }' 00:36:52.659 16:13:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:52.659 16:13:56 -- common/autotest_common.sh@10 -- # set +x 00:36:52.918 16:13:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:52.918 16:13:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:36:53.484 [2024-07-22 16:13:57.453878] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:53.484 16:13:57 -- bdev/bdev_raid.sh@430 -- # '[' e7006f24-a811-4c63-bb6b-021da951178e '!=' e7006f24-a811-4c63-bb6b-021da951178e ']' 00:36:53.484 16:13:57 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:36:53.484 16:13:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:36:53.484 
16:13:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:36:53.484 16:13:57 -- bdev/bdev_raid.sh@511 -- # killprocess 74125 00:36:53.484 16:13:57 -- common/autotest_common.sh@926 -- # '[' -z 74125 ']' 00:36:53.484 16:13:57 -- common/autotest_common.sh@930 -- # kill -0 74125 00:36:53.484 16:13:57 -- common/autotest_common.sh@931 -- # uname 00:36:53.484 16:13:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:36:53.484 16:13:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74125 00:36:53.484 killing process with pid 74125 00:36:53.484 16:13:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:36:53.484 16:13:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:36:53.484 16:13:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74125' 00:36:53.484 16:13:57 -- common/autotest_common.sh@945 -- # kill 74125 00:36:53.484 [2024-07-22 16:13:57.517099] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:53.484 16:13:57 -- common/autotest_common.sh@950 -- # wait 74125 00:36:53.484 [2024-07-22 16:13:57.517229] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:53.484 [2024-07-22 16:13:57.517314] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:53.484 [2024-07-22 16:13:57.517335] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:36:53.742 [2024-07-22 16:13:57.798542] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:36:55.121 ************************************ 00:36:55.121 END TEST raid_superblock_test 00:36:55.121 ************************************ 00:36:55.121 00:36:55.121 real 0m10.859s 00:36:55.121 user 0m17.658s 00:36:55.121 sys 0m1.674s 00:36:55.121 16:13:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:55.121 16:13:59 -- common/autotest_common.sh@10 -- # set +x 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:36:55.121 16:13:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:36:55.121 16:13:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:36:55.121 16:13:59 -- common/autotest_common.sh@10 -- # set +x 00:36:55.121 ************************************ 00:36:55.121 START TEST raid_state_function_test 00:36:55.121 ************************************ 00:36:55.121 16:13:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs 
)) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:36:55.121 Process raid pid: 74418 00:36:55.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=74418 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74418' 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74418 /var/tmp/spdk-raid.sock 00:36:55.121 16:13:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:55.121 16:13:59 -- common/autotest_common.sh@819 -- # '[' -z 74418 ']' 00:36:55.121 16:13:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:55.121 16:13:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:36:55.121 16:13:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:55.121 16:13:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:36:55.121 16:13:59 -- common/autotest_common.sh@10 -- # set +x 00:36:55.121 [2024-07-22 16:13:59.250637] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:36:55.121 [2024-07-22 16:13:59.251828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:55.379 [2024-07-22 16:13:59.433936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.637 [2024-07-22 16:13:59.733830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.895 [2024-07-22 16:13:59.949440] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:56.153 16:14:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:36:56.153 16:14:00 -- common/autotest_common.sh@852 -- # return 0 00:36:56.153 16:14:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:56.411 [2024-07-22 16:14:00.430721] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:56.411 [2024-07-22 16:14:00.430839] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:56.411 [2024-07-22 16:14:00.430858] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:56.411 [2024-07-22 16:14:00.430876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:56.411 [2024-07-22 16:14:00.430887] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:56.411 [2024-07-22 16:14:00.430903] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.411 16:14:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:56.669 16:14:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:56.669 "name": "Existed_Raid", 00:36:56.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.669 "strip_size_kb": 0, 00:36:56.669 "state": "configuring", 00:36:56.669 "raid_level": "raid1", 00:36:56.669 "superblock": false, 00:36:56.669 "num_base_bdevs": 3, 00:36:56.669 "num_base_bdevs_discovered": 0, 00:36:56.669 "num_base_bdevs_operational": 3, 00:36:56.669 "base_bdevs_list": [ 00:36:56.669 { 00:36:56.669 "name": "BaseBdev1", 00:36:56.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.669 "is_configured": false, 00:36:56.669 "data_offset": 0, 00:36:56.669 "data_size": 0 00:36:56.669 }, 00:36:56.669 { 00:36:56.669 "name": "BaseBdev2", 00:36:56.669 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:56.669 "is_configured": false, 00:36:56.669 "data_offset": 0, 00:36:56.669 "data_size": 0 00:36:56.669 }, 00:36:56.669 { 00:36:56.669 "name": "BaseBdev3", 00:36:56.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.669 "is_configured": false, 00:36:56.669 "data_offset": 0, 00:36:56.669 "data_size": 0 00:36:56.669 } 00:36:56.669 ] 00:36:56.669 }' 00:36:56.669 16:14:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:56.669 16:14:00 -- common/autotest_common.sh@10 -- # set +x 00:36:56.928 16:14:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:57.186 [2024-07-22 16:14:01.318810] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:57.186 [2024-07-22 16:14:01.318889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:36:57.186 16:14:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:57.444 [2024-07-22 16:14:01.606929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:57.444 [2024-07-22 16:14:01.607055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:57.444 [2024-07-22 16:14:01.607074] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:57.444 [2024-07-22 16:14:01.607097] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:57.444 [2024-07-22 16:14:01.607107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:57.444 [2024-07-22 16:14:01.607122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:57.444 16:14:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:36:57.701 [2024-07-22 16:14:01.877826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:57.701 BaseBdev1 00:36:57.701 16:14:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:36:57.701 16:14:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:36:57.701 16:14:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:36:57.701 16:14:01 -- common/autotest_common.sh@889 -- # local i 00:36:57.701 16:14:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:36:57.701 16:14:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:36:57.701 16:14:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:57.960 16:14:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:58.219 [ 00:36:58.219 { 00:36:58.219 "name": "BaseBdev1", 00:36:58.219 "aliases": [ 00:36:58.219 "738d9521-777b-4999-9bcf-60c0507438e2" 00:36:58.219 ], 00:36:58.219 "product_name": "Malloc disk", 00:36:58.219 "block_size": 512, 00:36:58.219 "num_blocks": 65536, 00:36:58.219 "uuid": "738d9521-777b-4999-9bcf-60c0507438e2", 00:36:58.219 "assigned_rate_limits": { 00:36:58.219 "rw_ios_per_sec": 0, 00:36:58.219 "rw_mbytes_per_sec": 0, 00:36:58.219 "r_mbytes_per_sec": 0, 00:36:58.219 "w_mbytes_per_sec": 0 
00:36:58.219 }, 00:36:58.219 "claimed": true, 00:36:58.219 "claim_type": "exclusive_write", 00:36:58.219 "zoned": false, 00:36:58.219 "supported_io_types": { 00:36:58.219 "read": true, 00:36:58.219 "write": true, 00:36:58.219 "unmap": true, 00:36:58.219 "write_zeroes": true, 00:36:58.219 "flush": true, 00:36:58.219 "reset": true, 00:36:58.219 "compare": false, 00:36:58.219 "compare_and_write": false, 00:36:58.219 "abort": true, 00:36:58.219 "nvme_admin": false, 00:36:58.219 "nvme_io": false 00:36:58.219 }, 00:36:58.219 "memory_domains": [ 00:36:58.219 { 00:36:58.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.219 "dma_device_type": 2 00:36:58.219 } 00:36:58.219 ], 00:36:58.219 "driver_specific": {} 00:36:58.219 } 00:36:58.219 ] 00:36:58.219 16:14:02 -- common/autotest_common.sh@895 -- # return 0 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.219 16:14:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:58.495 16:14:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:58.495 "name": "Existed_Raid", 00:36:58.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.495 "strip_size_kb": 0, 00:36:58.495 "state": "configuring", 00:36:58.495 "raid_level": "raid1", 00:36:58.495 "superblock": false, 00:36:58.495 "num_base_bdevs": 3, 00:36:58.495 "num_base_bdevs_discovered": 1, 00:36:58.495 "num_base_bdevs_operational": 3, 00:36:58.495 "base_bdevs_list": [ 00:36:58.495 { 00:36:58.495 "name": "BaseBdev1", 00:36:58.495 "uuid": "738d9521-777b-4999-9bcf-60c0507438e2", 00:36:58.495 "is_configured": true, 00:36:58.495 "data_offset": 0, 00:36:58.495 "data_size": 65536 00:36:58.495 }, 00:36:58.495 { 00:36:58.495 "name": "BaseBdev2", 00:36:58.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.495 "is_configured": false, 00:36:58.495 "data_offset": 0, 00:36:58.495 "data_size": 0 00:36:58.495 }, 00:36:58.495 { 00:36:58.495 "name": "BaseBdev3", 00:36:58.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.495 "is_configured": false, 00:36:58.495 "data_offset": 0, 00:36:58.495 "data_size": 0 00:36:58.495 } 00:36:58.495 ] 00:36:58.495 }' 00:36:58.495 16:14:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:58.495 16:14:02 -- common/autotest_common.sh@10 -- # set +x 00:36:58.771 16:14:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:59.028 [2024-07-22 16:14:03.242387] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:59.028 [2024-07-22 16:14:03.242494] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 
name Existed_Raid, state configuring 00:36:59.028 16:14:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:36:59.028 16:14:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:36:59.286 [2024-07-22 16:14:03.482566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:59.286 [2024-07-22 16:14:03.485266] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:59.286 [2024-07-22 16:14:03.485345] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:59.286 [2024-07-22 16:14:03.485361] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:59.286 [2024-07-22 16:14:03.485379] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.286 16:14:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:59.544 16:14:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:36:59.544 "name": "Existed_Raid", 00:36:59.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.544 "strip_size_kb": 0, 00:36:59.544 "state": "configuring", 00:36:59.544 "raid_level": "raid1", 00:36:59.544 "superblock": false, 00:36:59.544 "num_base_bdevs": 3, 00:36:59.544 "num_base_bdevs_discovered": 1, 00:36:59.544 "num_base_bdevs_operational": 3, 00:36:59.544 "base_bdevs_list": [ 00:36:59.544 { 00:36:59.544 "name": "BaseBdev1", 00:36:59.544 "uuid": "738d9521-777b-4999-9bcf-60c0507438e2", 00:36:59.544 "is_configured": true, 00:36:59.544 "data_offset": 0, 00:36:59.544 "data_size": 65536 00:36:59.544 }, 00:36:59.544 { 00:36:59.544 "name": "BaseBdev2", 00:36:59.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.544 "is_configured": false, 00:36:59.544 "data_offset": 0, 00:36:59.544 "data_size": 0 00:36:59.544 }, 00:36:59.544 { 00:36:59.544 "name": "BaseBdev3", 00:36:59.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.544 "is_configured": false, 00:36:59.544 "data_offset": 0, 00:36:59.544 "data_size": 0 00:36:59.544 } 00:36:59.544 ] 00:36:59.544 }' 00:36:59.802 16:14:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:36:59.802 16:14:03 -- common/autotest_common.sh@10 -- # set +x 00:37:00.060 16:14:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:00.318 [2024-07-22 16:14:04.475459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:00.318 BaseBdev2 00:37:00.318 16:14:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:37:00.318 16:14:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:37:00.318 16:14:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:00.318 16:14:04 -- common/autotest_common.sh@889 -- # local i 00:37:00.318 16:14:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:00.318 16:14:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:00.318 16:14:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:00.584 16:14:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:00.842 [ 00:37:00.842 { 00:37:00.842 "name": "BaseBdev2", 00:37:00.842 "aliases": [ 00:37:00.842 "de548f2f-42f0-46b9-aecb-9e9e6dbf72c5" 00:37:00.842 ], 00:37:00.842 "product_name": "Malloc disk", 00:37:00.842 "block_size": 512, 00:37:00.842 "num_blocks": 65536, 00:37:00.842 "uuid": "de548f2f-42f0-46b9-aecb-9e9e6dbf72c5", 00:37:00.842 "assigned_rate_limits": { 00:37:00.842 "rw_ios_per_sec": 0, 00:37:00.842 "rw_mbytes_per_sec": 0, 00:37:00.842 "r_mbytes_per_sec": 0, 00:37:00.842 "w_mbytes_per_sec": 0 00:37:00.842 }, 00:37:00.842 "claimed": true, 00:37:00.842 "claim_type": "exclusive_write", 00:37:00.842 "zoned": false, 00:37:00.842 "supported_io_types": { 00:37:00.843 "read": true, 00:37:00.843 "write": true, 00:37:00.843 "unmap": true, 00:37:00.843 "write_zeroes": true, 00:37:00.843 "flush": true, 00:37:00.843 "reset": true, 00:37:00.843 "compare": false, 00:37:00.843 "compare_and_write": false, 00:37:00.843 "abort": true, 00:37:00.843 "nvme_admin": false, 00:37:00.843 "nvme_io": false 00:37:00.843 }, 00:37:00.843 "memory_domains": [ 00:37:00.843 { 00:37:00.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:00.843 "dma_device_type": 2 00:37:00.843 } 00:37:00.843 ], 00:37:00.843 "driver_specific": {} 00:37:00.843 } 00:37:00.843 ] 00:37:00.843 16:14:05 -- common/autotest_common.sh@895 -- # return 0 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.843 16:14:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:01.101 16:14:05 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:37:01.101 "name": "Existed_Raid", 00:37:01.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:01.101 "strip_size_kb": 0, 00:37:01.101 "state": "configuring", 00:37:01.101 "raid_level": "raid1", 00:37:01.101 "superblock": false, 00:37:01.101 "num_base_bdevs": 3, 00:37:01.101 "num_base_bdevs_discovered": 2, 00:37:01.101 "num_base_bdevs_operational": 3, 00:37:01.101 "base_bdevs_list": [ 00:37:01.101 { 00:37:01.101 "name": "BaseBdev1", 00:37:01.101 "uuid": "738d9521-777b-4999-9bcf-60c0507438e2", 00:37:01.101 "is_configured": true, 00:37:01.101 "data_offset": 0, 00:37:01.101 "data_size": 65536 00:37:01.101 }, 00:37:01.101 { 00:37:01.101 "name": "BaseBdev2", 00:37:01.101 "uuid": "de548f2f-42f0-46b9-aecb-9e9e6dbf72c5", 00:37:01.101 "is_configured": true, 00:37:01.101 "data_offset": 0, 00:37:01.101 "data_size": 65536 00:37:01.101 }, 00:37:01.101 { 00:37:01.101 "name": "BaseBdev3", 00:37:01.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:01.101 "is_configured": false, 00:37:01.101 "data_offset": 0, 00:37:01.101 "data_size": 0 00:37:01.101 } 00:37:01.101 ] 00:37:01.101 }' 00:37:01.101 16:14:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:01.101 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:37:01.668 16:14:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:37:01.927 [2024-07-22 16:14:05.964620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:01.927 [2024-07-22 16:14:05.964695] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:37:01.927 [2024-07-22 16:14:05.964716] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:37:01.927 [2024-07-22 16:14:05.964907] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:37:01.927 [2024-07-22 16:14:05.965488] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:37:01.927 [2024-07-22 16:14:05.965511] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:37:01.927 [2024-07-22 16:14:05.965900] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:01.927 BaseBdev3 00:37:01.927 16:14:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:37:01.927 16:14:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:37:01.927 16:14:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:01.927 16:14:05 -- common/autotest_common.sh@889 -- # local i 00:37:01.927 16:14:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:01.927 16:14:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:01.927 16:14:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:02.186 16:14:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:02.444 [ 00:37:02.444 { 00:37:02.444 "name": "BaseBdev3", 00:37:02.444 "aliases": [ 00:37:02.444 "0c67f282-53b2-40e8-9003-2a6d388e0129" 00:37:02.444 ], 00:37:02.444 "product_name": "Malloc disk", 00:37:02.444 "block_size": 512, 00:37:02.444 "num_blocks": 65536, 00:37:02.444 "uuid": "0c67f282-53b2-40e8-9003-2a6d388e0129", 00:37:02.444 "assigned_rate_limits": { 00:37:02.444 "rw_ios_per_sec": 0, 00:37:02.444 "rw_mbytes_per_sec": 0, 
00:37:02.444 "r_mbytes_per_sec": 0, 00:37:02.444 "w_mbytes_per_sec": 0 00:37:02.444 }, 00:37:02.444 "claimed": true, 00:37:02.444 "claim_type": "exclusive_write", 00:37:02.444 "zoned": false, 00:37:02.444 "supported_io_types": { 00:37:02.444 "read": true, 00:37:02.444 "write": true, 00:37:02.444 "unmap": true, 00:37:02.444 "write_zeroes": true, 00:37:02.444 "flush": true, 00:37:02.444 "reset": true, 00:37:02.444 "compare": false, 00:37:02.444 "compare_and_write": false, 00:37:02.444 "abort": true, 00:37:02.444 "nvme_admin": false, 00:37:02.444 "nvme_io": false 00:37:02.444 }, 00:37:02.444 "memory_domains": [ 00:37:02.444 { 00:37:02.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:02.444 "dma_device_type": 2 00:37:02.444 } 00:37:02.444 ], 00:37:02.444 "driver_specific": {} 00:37:02.444 } 00:37:02.444 ] 00:37:02.444 16:14:06 -- common/autotest_common.sh@895 -- # return 0 00:37:02.444 16:14:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.445 16:14:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:02.704 16:14:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:02.704 "name": "Existed_Raid", 00:37:02.704 "uuid": "a411570e-0ba3-40c5-a778-52350e06f0f6", 00:37:02.704 "strip_size_kb": 0, 00:37:02.704 "state": "online", 00:37:02.704 "raid_level": "raid1", 00:37:02.704 "superblock": false, 00:37:02.704 "num_base_bdevs": 3, 00:37:02.704 "num_base_bdevs_discovered": 3, 00:37:02.704 "num_base_bdevs_operational": 3, 00:37:02.704 "base_bdevs_list": [ 00:37:02.704 { 00:37:02.704 "name": "BaseBdev1", 00:37:02.704 "uuid": "738d9521-777b-4999-9bcf-60c0507438e2", 00:37:02.704 "is_configured": true, 00:37:02.704 "data_offset": 0, 00:37:02.704 "data_size": 65536 00:37:02.704 }, 00:37:02.704 { 00:37:02.704 "name": "BaseBdev2", 00:37:02.704 "uuid": "de548f2f-42f0-46b9-aecb-9e9e6dbf72c5", 00:37:02.704 "is_configured": true, 00:37:02.704 "data_offset": 0, 00:37:02.704 "data_size": 65536 00:37:02.704 }, 00:37:02.704 { 00:37:02.704 "name": "BaseBdev3", 00:37:02.704 "uuid": "0c67f282-53b2-40e8-9003-2a6d388e0129", 00:37:02.704 "is_configured": true, 00:37:02.704 "data_offset": 0, 00:37:02.704 "data_size": 65536 00:37:02.704 } 00:37:02.704 ] 00:37:02.704 }' 00:37:02.704 16:14:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:02.704 16:14:06 -- common/autotest_common.sh@10 -- # set +x 00:37:02.963 16:14:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:03.222 [2024-07-22 
16:14:07.309311] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:03.222 16:14:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:03.481 16:14:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:03.481 "name": "Existed_Raid", 00:37:03.481 "uuid": "a411570e-0ba3-40c5-a778-52350e06f0f6", 00:37:03.481 "strip_size_kb": 0, 00:37:03.481 "state": "online", 00:37:03.481 "raid_level": "raid1", 00:37:03.481 "superblock": false, 00:37:03.481 "num_base_bdevs": 3, 00:37:03.481 "num_base_bdevs_discovered": 2, 00:37:03.481 "num_base_bdevs_operational": 2, 00:37:03.481 "base_bdevs_list": [ 00:37:03.481 { 00:37:03.481 "name": null, 00:37:03.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:03.481 "is_configured": false, 00:37:03.481 "data_offset": 0, 00:37:03.481 "data_size": 65536 00:37:03.481 }, 00:37:03.481 { 00:37:03.481 "name": "BaseBdev2", 00:37:03.481 "uuid": "de548f2f-42f0-46b9-aecb-9e9e6dbf72c5", 00:37:03.481 "is_configured": true, 00:37:03.481 "data_offset": 0, 00:37:03.481 "data_size": 65536 00:37:03.481 }, 00:37:03.481 { 00:37:03.481 "name": "BaseBdev3", 00:37:03.481 "uuid": "0c67f282-53b2-40e8-9003-2a6d388e0129", 00:37:03.481 "is_configured": true, 00:37:03.481 "data_offset": 0, 00:37:03.481 "data_size": 65536 00:37:03.481 } 00:37:03.481 ] 00:37:03.481 }' 00:37:03.481 16:14:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:03.481 16:14:07 -- common/autotest_common.sh@10 -- # set +x 00:37:03.740 16:14:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:37:03.740 16:14:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:03.740 16:14:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:03.740 16:14:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:37:04.306 16:14:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:37:04.306 16:14:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:04.306 16:14:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:04.306 [2024-07-22 16:14:08.541206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
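The exchange above reduces to a short RPC sequence: create three malloc base bdevs, assemble them into a raid1 bdev named Existed_Raid, confirm the array reaches the online state, then delete one member and confirm raid1's redundancy keeps it online with num_base_bdevs_discovered dropping to 2. A hand-runnable sketch, reusing the $rpc/$sock shorthands from the launch sketch earlier and only commands and JSON fields that appear in this trace (the test itself issues bdev_raid_create first and lets the array sit in the configuring state until the base bdevs appear; the order below is simplified):

  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"    # 32 MiB, 512-byte blocks -> 65536 blocks
  done
  "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'    # -> online
  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1    # drop one mirror leg
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").num_base_bdevs_discovered'    # -> 2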
00:37:04.564 16:14:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:37:04.564 16:14:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:04.564 16:14:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.564 16:14:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:37:04.822 16:14:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:37:04.822 16:14:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:04.823 16:14:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:37:05.083 [2024-07-22 16:14:09.152673] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:05.083 [2024-07-22 16:14:09.153067] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:05.083 [2024-07-22 16:14:09.153174] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:05.083 [2024-07-22 16:14:09.250631] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:05.083 [2024-07-22 16:14:09.250709] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:37:05.083 16:14:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:37:05.083 16:14:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:05.083 16:14:09 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.083 16:14:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:37:05.343 16:14:09 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:37:05.343 16:14:09 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:37:05.343 16:14:09 -- bdev/bdev_raid.sh@287 -- # killprocess 74418 00:37:05.343 16:14:09 -- common/autotest_common.sh@926 -- # '[' -z 74418 ']' 00:37:05.343 16:14:09 -- common/autotest_common.sh@930 -- # kill -0 74418 00:37:05.343 16:14:09 -- common/autotest_common.sh@931 -- # uname 00:37:05.343 16:14:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:05.343 16:14:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74418 00:37:05.343 killing process with pid 74418 00:37:05.343 16:14:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:05.343 16:14:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:05.343 16:14:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74418' 00:37:05.343 16:14:09 -- common/autotest_common.sh@945 -- # kill 74418 00:37:05.343 16:14:09 -- common/autotest_common.sh@950 -- # wait 74418 00:37:05.343 [2024-07-22 16:14:09.591972] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:05.343 [2024-07-22 16:14:09.592134] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:06.714 16:14:10 -- bdev/bdev_raid.sh@289 -- # return 0 00:37:06.714 00:37:06.714 real 0m11.776s 00:37:06.714 user 0m19.127s 00:37:06.714 sys 0m2.019s 00:37:06.714 16:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:06.714 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:37:06.714 ************************************ 00:37:06.714 END TEST raid_state_function_test 00:37:06.714 ************************************ 00:37:06.973 16:14:10 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:37:06.973 
16:14:10 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:37:06.973 16:14:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:06.973 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:37:06.973 ************************************ 00:37:06.973 START TEST raid_state_function_test_sb 00:37:06.973 ************************************ 00:37:06.973 16:14:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=74769 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74769' 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:06.973 Process raid pid: 74769 00:37:06.973 16:14:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74769 /var/tmp/spdk-raid.sock 00:37:06.973 16:14:11 -- common/autotest_common.sh@819 -- # '[' -z 74769 ']' 00:37:06.973 16:14:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:06.973 16:14:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:06.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:06.973 16:14:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:06.973 16:14:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:06.973 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:37:06.973 [2024-07-22 16:14:11.090805] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
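The raid_state_function_test_sb run that begins here repeats the same state checks but passes -s to bdev_raid_create, so a RAID superblock is written to each base bdev. That is why the JSON later in this trace reports "superblock": true with data_offset 2048 and data_size 63488, whereas the non-superblock run above reported 0 and 65536. Side by side, the two creation calls (reusing the $rpc/$sock shorthands from earlier):

  "$rpc" -s "$sock" bdev_raid_create    -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid    # no superblock: data_offset=0,    data_size=65536
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid    # superblock:    data_offset=2048, data_size=63488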
00:37:06.973 [2024-07-22 16:14:11.091003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.231 [2024-07-22 16:14:11.275567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.489 [2024-07-22 16:14:11.607997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.747 [2024-07-22 16:14:11.858131] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:08.005 16:14:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:08.005 16:14:12 -- common/autotest_common.sh@852 -- # return 0 00:37:08.005 16:14:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:37:08.263 [2024-07-22 16:14:12.294455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:08.263 [2024-07-22 16:14:12.294557] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:08.263 [2024-07-22 16:14:12.294575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:08.263 [2024-07-22 16:14:12.294592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:08.263 [2024-07-22 16:14:12.294618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:08.263 [2024-07-22 16:14:12.294634] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.263 16:14:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:08.521 16:14:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:08.521 "name": "Existed_Raid", 00:37:08.521 "uuid": "641cfca9-5c63-4d3d-b6fc-d0058414c6c4", 00:37:08.521 "strip_size_kb": 0, 00:37:08.521 "state": "configuring", 00:37:08.521 "raid_level": "raid1", 00:37:08.521 "superblock": true, 00:37:08.521 "num_base_bdevs": 3, 00:37:08.521 "num_base_bdevs_discovered": 0, 00:37:08.521 "num_base_bdevs_operational": 3, 00:37:08.521 "base_bdevs_list": [ 00:37:08.521 { 00:37:08.521 "name": "BaseBdev1", 00:37:08.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.521 "is_configured": false, 00:37:08.521 "data_offset": 0, 00:37:08.521 "data_size": 0 00:37:08.521 }, 00:37:08.521 { 00:37:08.521 "name": "BaseBdev2", 00:37:08.521 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:37:08.521 "is_configured": false, 00:37:08.521 "data_offset": 0, 00:37:08.521 "data_size": 0 00:37:08.521 }, 00:37:08.521 { 00:37:08.521 "name": "BaseBdev3", 00:37:08.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.521 "is_configured": false, 00:37:08.521 "data_offset": 0, 00:37:08.521 "data_size": 0 00:37:08.521 } 00:37:08.521 ] 00:37:08.521 }' 00:37:08.521 16:14:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:08.521 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:37:08.779 16:14:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:09.037 [2024-07-22 16:14:13.106600] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:09.037 [2024-07-22 16:14:13.106681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:37:09.038 16:14:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:37:09.296 [2024-07-22 16:14:13.338792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:09.296 [2024-07-22 16:14:13.338915] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:09.296 [2024-07-22 16:14:13.338931] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:09.296 [2024-07-22 16:14:13.338952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:09.296 [2024-07-22 16:14:13.338962] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:09.296 [2024-07-22 16:14:13.338978] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:09.296 16:14:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:09.554 [2024-07-22 16:14:13.681224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:09.554 BaseBdev1 00:37:09.554 16:14:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:37:09.554 16:14:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:37:09.554 16:14:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:09.554 16:14:13 -- common/autotest_common.sh@889 -- # local i 00:37:09.554 16:14:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:09.554 16:14:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:09.554 16:14:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:09.813 16:14:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:10.072 [ 00:37:10.072 { 00:37:10.072 "name": "BaseBdev1", 00:37:10.072 "aliases": [ 00:37:10.072 "cf107acb-8337-440b-969c-72c555273bd2" 00:37:10.072 ], 00:37:10.072 "product_name": "Malloc disk", 00:37:10.072 "block_size": 512, 00:37:10.072 "num_blocks": 65536, 00:37:10.072 "uuid": "cf107acb-8337-440b-969c-72c555273bd2", 00:37:10.072 "assigned_rate_limits": { 00:37:10.072 "rw_ios_per_sec": 0, 00:37:10.072 "rw_mbytes_per_sec": 0, 00:37:10.072 "r_mbytes_per_sec": 0, 00:37:10.072 "w_mbytes_per_sec": 0 
00:37:10.072 }, 00:37:10.072 "claimed": true, 00:37:10.072 "claim_type": "exclusive_write", 00:37:10.072 "zoned": false, 00:37:10.072 "supported_io_types": { 00:37:10.072 "read": true, 00:37:10.072 "write": true, 00:37:10.072 "unmap": true, 00:37:10.072 "write_zeroes": true, 00:37:10.072 "flush": true, 00:37:10.072 "reset": true, 00:37:10.072 "compare": false, 00:37:10.072 "compare_and_write": false, 00:37:10.072 "abort": true, 00:37:10.072 "nvme_admin": false, 00:37:10.072 "nvme_io": false 00:37:10.072 }, 00:37:10.072 "memory_domains": [ 00:37:10.072 { 00:37:10.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:10.072 "dma_device_type": 2 00:37:10.072 } 00:37:10.072 ], 00:37:10.072 "driver_specific": {} 00:37:10.072 } 00:37:10.072 ] 00:37:10.072 16:14:14 -- common/autotest_common.sh@895 -- # return 0 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:10.072 16:14:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:10.331 16:14:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:10.331 "name": "Existed_Raid", 00:37:10.331 "uuid": "470a1d79-a10e-43e0-bf2b-8e8c9a8c6e32", 00:37:10.331 "strip_size_kb": 0, 00:37:10.331 "state": "configuring", 00:37:10.331 "raid_level": "raid1", 00:37:10.331 "superblock": true, 00:37:10.331 "num_base_bdevs": 3, 00:37:10.331 "num_base_bdevs_discovered": 1, 00:37:10.331 "num_base_bdevs_operational": 3, 00:37:10.331 "base_bdevs_list": [ 00:37:10.331 { 00:37:10.331 "name": "BaseBdev1", 00:37:10.331 "uuid": "cf107acb-8337-440b-969c-72c555273bd2", 00:37:10.331 "is_configured": true, 00:37:10.331 "data_offset": 2048, 00:37:10.331 "data_size": 63488 00:37:10.331 }, 00:37:10.331 { 00:37:10.331 "name": "BaseBdev2", 00:37:10.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.331 "is_configured": false, 00:37:10.331 "data_offset": 0, 00:37:10.331 "data_size": 0 00:37:10.331 }, 00:37:10.331 { 00:37:10.331 "name": "BaseBdev3", 00:37:10.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.331 "is_configured": false, 00:37:10.331 "data_offset": 0, 00:37:10.331 "data_size": 0 00:37:10.331 } 00:37:10.331 ] 00:37:10.331 }' 00:37:10.331 16:14:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:10.331 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:37:10.589 16:14:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:10.847 [2024-07-22 16:14:15.006338] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:10.847 [2024-07-22 16:14:15.006450] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:37:10.847 16:14:15 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:37:10.847 16:14:15 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:11.106 16:14:15 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:11.364 BaseBdev1 00:37:11.364 16:14:15 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:37:11.364 16:14:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:37:11.364 16:14:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:11.364 16:14:15 -- common/autotest_common.sh@889 -- # local i 00:37:11.364 16:14:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:11.364 16:14:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:11.364 16:14:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:11.623 16:14:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:11.906 [ 00:37:11.906 { 00:37:11.907 "name": "BaseBdev1", 00:37:11.907 "aliases": [ 00:37:11.907 "2cca8d08-8a8a-476b-89ef-6c3d6df0f61f" 00:37:11.907 ], 00:37:11.907 "product_name": "Malloc disk", 00:37:11.907 "block_size": 512, 00:37:11.907 "num_blocks": 65536, 00:37:11.907 "uuid": "2cca8d08-8a8a-476b-89ef-6c3d6df0f61f", 00:37:11.907 "assigned_rate_limits": { 00:37:11.907 "rw_ios_per_sec": 0, 00:37:11.907 "rw_mbytes_per_sec": 0, 00:37:11.907 "r_mbytes_per_sec": 0, 00:37:11.907 "w_mbytes_per_sec": 0 00:37:11.907 }, 00:37:11.907 "claimed": false, 00:37:11.907 "zoned": false, 00:37:11.907 "supported_io_types": { 00:37:11.907 "read": true, 00:37:11.907 "write": true, 00:37:11.907 "unmap": true, 00:37:11.907 "write_zeroes": true, 00:37:11.907 "flush": true, 00:37:11.907 "reset": true, 00:37:11.907 "compare": false, 00:37:11.907 "compare_and_write": false, 00:37:11.907 "abort": true, 00:37:11.907 "nvme_admin": false, 00:37:11.907 "nvme_io": false 00:37:11.907 }, 00:37:11.907 "memory_domains": [ 00:37:11.907 { 00:37:11.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:11.907 "dma_device_type": 2 00:37:11.907 } 00:37:11.907 ], 00:37:11.907 "driver_specific": {} 00:37:11.907 } 00:37:11.907 ] 00:37:11.907 16:14:16 -- common/autotest_common.sh@895 -- # return 0 00:37:11.907 16:14:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:37:12.165 [2024-07-22 16:14:16.283920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:12.165 [2024-07-22 16:14:16.286647] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:12.165 [2024-07-22 16:14:16.286727] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:12.165 [2024-07-22 16:14:16.286743] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:12.165 [2024-07-22 16:14:16.286761] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:12.165 16:14:16 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:12.165 16:14:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:12.424 16:14:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:12.424 "name": "Existed_Raid", 00:37:12.424 "uuid": "06623bdb-8ac9-444c-87fb-30889fe2db8a", 00:37:12.424 "strip_size_kb": 0, 00:37:12.424 "state": "configuring", 00:37:12.424 "raid_level": "raid1", 00:37:12.424 "superblock": true, 00:37:12.424 "num_base_bdevs": 3, 00:37:12.424 "num_base_bdevs_discovered": 1, 00:37:12.424 "num_base_bdevs_operational": 3, 00:37:12.424 "base_bdevs_list": [ 00:37:12.424 { 00:37:12.424 "name": "BaseBdev1", 00:37:12.424 "uuid": "2cca8d08-8a8a-476b-89ef-6c3d6df0f61f", 00:37:12.424 "is_configured": true, 00:37:12.424 "data_offset": 2048, 00:37:12.424 "data_size": 63488 00:37:12.424 }, 00:37:12.424 { 00:37:12.424 "name": "BaseBdev2", 00:37:12.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:12.424 "is_configured": false, 00:37:12.424 "data_offset": 0, 00:37:12.424 "data_size": 0 00:37:12.424 }, 00:37:12.424 { 00:37:12.424 "name": "BaseBdev3", 00:37:12.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:12.424 "is_configured": false, 00:37:12.424 "data_offset": 0, 00:37:12.424 "data_size": 0 00:37:12.424 } 00:37:12.424 ] 00:37:12.424 }' 00:37:12.424 16:14:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:12.424 16:14:16 -- common/autotest_common.sh@10 -- # set +x 00:37:12.682 16:14:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:13.248 [2024-07-22 16:14:17.240812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:13.248 BaseBdev2 00:37:13.248 16:14:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:37:13.248 16:14:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:37:13.248 16:14:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:13.248 16:14:17 -- common/autotest_common.sh@889 -- # local i 00:37:13.248 16:14:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:13.248 16:14:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:13.248 16:14:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:13.248 16:14:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:13.507 [ 00:37:13.507 { 00:37:13.507 "name": "BaseBdev2", 00:37:13.507 "aliases": [ 00:37:13.507 
"89c3da54-9278-4605-91ef-fb8b45bf3b70" 00:37:13.507 ], 00:37:13.507 "product_name": "Malloc disk", 00:37:13.507 "block_size": 512, 00:37:13.507 "num_blocks": 65536, 00:37:13.507 "uuid": "89c3da54-9278-4605-91ef-fb8b45bf3b70", 00:37:13.507 "assigned_rate_limits": { 00:37:13.507 "rw_ios_per_sec": 0, 00:37:13.507 "rw_mbytes_per_sec": 0, 00:37:13.507 "r_mbytes_per_sec": 0, 00:37:13.507 "w_mbytes_per_sec": 0 00:37:13.507 }, 00:37:13.507 "claimed": true, 00:37:13.507 "claim_type": "exclusive_write", 00:37:13.507 "zoned": false, 00:37:13.507 "supported_io_types": { 00:37:13.507 "read": true, 00:37:13.507 "write": true, 00:37:13.507 "unmap": true, 00:37:13.507 "write_zeroes": true, 00:37:13.507 "flush": true, 00:37:13.507 "reset": true, 00:37:13.507 "compare": false, 00:37:13.507 "compare_and_write": false, 00:37:13.507 "abort": true, 00:37:13.507 "nvme_admin": false, 00:37:13.507 "nvme_io": false 00:37:13.507 }, 00:37:13.507 "memory_domains": [ 00:37:13.507 { 00:37:13.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.507 "dma_device_type": 2 00:37:13.507 } 00:37:13.507 ], 00:37:13.507 "driver_specific": {} 00:37:13.507 } 00:37:13.507 ] 00:37:13.507 16:14:17 -- common/autotest_common.sh@895 -- # return 0 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.507 16:14:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:13.766 16:14:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:13.766 "name": "Existed_Raid", 00:37:13.766 "uuid": "06623bdb-8ac9-444c-87fb-30889fe2db8a", 00:37:13.766 "strip_size_kb": 0, 00:37:13.766 "state": "configuring", 00:37:13.766 "raid_level": "raid1", 00:37:13.766 "superblock": true, 00:37:13.766 "num_base_bdevs": 3, 00:37:13.766 "num_base_bdevs_discovered": 2, 00:37:13.766 "num_base_bdevs_operational": 3, 00:37:13.766 "base_bdevs_list": [ 00:37:13.766 { 00:37:13.766 "name": "BaseBdev1", 00:37:13.766 "uuid": "2cca8d08-8a8a-476b-89ef-6c3d6df0f61f", 00:37:13.766 "is_configured": true, 00:37:13.766 "data_offset": 2048, 00:37:13.766 "data_size": 63488 00:37:13.766 }, 00:37:13.766 { 00:37:13.766 "name": "BaseBdev2", 00:37:13.766 "uuid": "89c3da54-9278-4605-91ef-fb8b45bf3b70", 00:37:13.766 "is_configured": true, 00:37:13.766 "data_offset": 2048, 00:37:13.766 "data_size": 63488 00:37:13.766 }, 00:37:13.766 { 00:37:13.766 "name": "BaseBdev3", 00:37:13.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:13.766 "is_configured": false, 00:37:13.766 "data_offset": 0, 00:37:13.766 "data_size": 0 00:37:13.766 } 
00:37:13.766 ] 00:37:13.766 }' 00:37:13.766 16:14:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:13.766 16:14:18 -- common/autotest_common.sh@10 -- # set +x 00:37:14.334 16:14:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:37:14.334 [2024-07-22 16:14:18.580432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:14.334 [2024-07-22 16:14:18.580799] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:37:14.334 [2024-07-22 16:14:18.580839] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:14.334 [2024-07-22 16:14:18.581046] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:37:14.334 [2024-07-22 16:14:18.581516] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:37:14.334 [2024-07-22 16:14:18.581543] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:37:14.334 [2024-07-22 16:14:18.581758] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:14.334 BaseBdev3 00:37:14.334 16:14:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:37:14.334 16:14:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:37:14.334 16:14:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:14.334 16:14:18 -- common/autotest_common.sh@889 -- # local i 00:37:14.334 16:14:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:14.334 16:14:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:14.334 16:14:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:14.592 16:14:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:14.851 [ 00:37:14.851 { 00:37:14.851 "name": "BaseBdev3", 00:37:14.851 "aliases": [ 00:37:14.851 "fd52e4b1-a231-4587-bb06-ce3de26ced7f" 00:37:14.851 ], 00:37:14.851 "product_name": "Malloc disk", 00:37:14.851 "block_size": 512, 00:37:14.851 "num_blocks": 65536, 00:37:14.851 "uuid": "fd52e4b1-a231-4587-bb06-ce3de26ced7f", 00:37:14.851 "assigned_rate_limits": { 00:37:14.851 "rw_ios_per_sec": 0, 00:37:14.851 "rw_mbytes_per_sec": 0, 00:37:14.851 "r_mbytes_per_sec": 0, 00:37:14.851 "w_mbytes_per_sec": 0 00:37:14.851 }, 00:37:14.851 "claimed": true, 00:37:14.851 "claim_type": "exclusive_write", 00:37:14.851 "zoned": false, 00:37:14.851 "supported_io_types": { 00:37:14.851 "read": true, 00:37:14.851 "write": true, 00:37:14.851 "unmap": true, 00:37:14.851 "write_zeroes": true, 00:37:14.851 "flush": true, 00:37:14.851 "reset": true, 00:37:14.851 "compare": false, 00:37:14.851 "compare_and_write": false, 00:37:14.851 "abort": true, 00:37:14.851 "nvme_admin": false, 00:37:14.851 "nvme_io": false 00:37:14.851 }, 00:37:14.851 "memory_domains": [ 00:37:14.851 { 00:37:14.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:14.851 "dma_device_type": 2 00:37:14.851 } 00:37:14.851 ], 00:37:14.851 "driver_specific": {} 00:37:14.851 } 00:37:14.851 ] 00:37:14.851 16:14:19 -- common/autotest_common.sh@895 -- # return 0 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.851 16:14:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:15.110 16:14:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:15.110 "name": "Existed_Raid", 00:37:15.110 "uuid": "06623bdb-8ac9-444c-87fb-30889fe2db8a", 00:37:15.110 "strip_size_kb": 0, 00:37:15.110 "state": "online", 00:37:15.110 "raid_level": "raid1", 00:37:15.110 "superblock": true, 00:37:15.110 "num_base_bdevs": 3, 00:37:15.110 "num_base_bdevs_discovered": 3, 00:37:15.110 "num_base_bdevs_operational": 3, 00:37:15.110 "base_bdevs_list": [ 00:37:15.110 { 00:37:15.110 "name": "BaseBdev1", 00:37:15.110 "uuid": "2cca8d08-8a8a-476b-89ef-6c3d6df0f61f", 00:37:15.110 "is_configured": true, 00:37:15.110 "data_offset": 2048, 00:37:15.110 "data_size": 63488 00:37:15.110 }, 00:37:15.110 { 00:37:15.110 "name": "BaseBdev2", 00:37:15.110 "uuid": "89c3da54-9278-4605-91ef-fb8b45bf3b70", 00:37:15.110 "is_configured": true, 00:37:15.110 "data_offset": 2048, 00:37:15.110 "data_size": 63488 00:37:15.110 }, 00:37:15.110 { 00:37:15.110 "name": "BaseBdev3", 00:37:15.110 "uuid": "fd52e4b1-a231-4587-bb06-ce3de26ced7f", 00:37:15.110 "is_configured": true, 00:37:15.110 "data_offset": 2048, 00:37:15.110 "data_size": 63488 00:37:15.110 } 00:37:15.110 ] 00:37:15.110 }' 00:37:15.110 16:14:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:15.110 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:37:15.368 16:14:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:15.626 [2024-07-22 16:14:19.869205] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
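Each state-function test tears its target down the same way: the killprocess helper (seen below for pid 74769 and earlier for pid 74418) checks that the pid is alive and belongs to an SPDK reactor, kills it, and reaps it. Its essential steps, sketched with the raid_pid variable from the launch sketch earlier:

  kill -0 "$raid_pid"                      # still running?
  ps --no-headers -o comm= "$raid_pid"     # expect reactor_0 (refuses to kill sudo or unrelated processes)
  kill "$raid_pid"
  wait "$raid_pid"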
00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:15.885 16:14:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.143 16:14:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:16.143 "name": "Existed_Raid", 00:37:16.143 "uuid": "06623bdb-8ac9-444c-87fb-30889fe2db8a", 00:37:16.143 "strip_size_kb": 0, 00:37:16.143 "state": "online", 00:37:16.143 "raid_level": "raid1", 00:37:16.143 "superblock": true, 00:37:16.143 "num_base_bdevs": 3, 00:37:16.143 "num_base_bdevs_discovered": 2, 00:37:16.143 "num_base_bdevs_operational": 2, 00:37:16.143 "base_bdevs_list": [ 00:37:16.143 { 00:37:16.143 "name": null, 00:37:16.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:16.143 "is_configured": false, 00:37:16.143 "data_offset": 2048, 00:37:16.143 "data_size": 63488 00:37:16.143 }, 00:37:16.143 { 00:37:16.143 "name": "BaseBdev2", 00:37:16.143 "uuid": "89c3da54-9278-4605-91ef-fb8b45bf3b70", 00:37:16.143 "is_configured": true, 00:37:16.143 "data_offset": 2048, 00:37:16.143 "data_size": 63488 00:37:16.143 }, 00:37:16.143 { 00:37:16.143 "name": "BaseBdev3", 00:37:16.143 "uuid": "fd52e4b1-a231-4587-bb06-ce3de26ced7f", 00:37:16.143 "is_configured": true, 00:37:16.143 "data_offset": 2048, 00:37:16.143 "data_size": 63488 00:37:16.143 } 00:37:16.143 ] 00:37:16.143 }' 00:37:16.143 16:14:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:16.143 16:14:20 -- common/autotest_common.sh@10 -- # set +x 00:37:16.401 16:14:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:37:16.401 16:14:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:16.401 16:14:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:37:16.401 16:14:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.660 16:14:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:37:16.660 16:14:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:16.660 16:14:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:16.919 [2024-07-22 16:14:21.107719] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:17.177 16:14:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:37:17.177 16:14:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:17.177 16:14:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:17.177 16:14:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:37:17.435 16:14:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:37:17.435 16:14:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:17.435 16:14:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:37:17.435 [2024-07-22 16:14:21.690843] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:17.435 [2024-07-22 16:14:21.690931] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:17.435 [2024-07-22 16:14:21.691003] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:17.694 [2024-07-22 16:14:21.786114] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:17.694 [2024-07-22 16:14:21.786177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:37:17.694 16:14:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:37:17.694 16:14:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:17.694 16:14:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:17.694 16:14:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:37:17.953 16:14:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:37:17.953 16:14:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:37:17.953 16:14:22 -- bdev/bdev_raid.sh@287 -- # killprocess 74769 00:37:17.953 16:14:22 -- common/autotest_common.sh@926 -- # '[' -z 74769 ']' 00:37:17.953 16:14:22 -- common/autotest_common.sh@930 -- # kill -0 74769 00:37:17.953 16:14:22 -- common/autotest_common.sh@931 -- # uname 00:37:17.953 16:14:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:17.953 16:14:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74769 00:37:17.953 killing process with pid 74769 00:37:17.953 16:14:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:17.953 16:14:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:17.953 16:14:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74769' 00:37:17.953 16:14:22 -- common/autotest_common.sh@945 -- # kill 74769 00:37:17.953 16:14:22 -- common/autotest_common.sh@950 -- # wait 74769 00:37:17.953 [2024-07-22 16:14:22.060921] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:17.953 [2024-07-22 16:14:22.061133] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:19.393 ************************************ 00:37:19.393 END TEST raid_state_function_test_sb 00:37:19.393 ************************************ 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:37:19.393 00:37:19.393 real 0m12.386s 00:37:19.393 user 0m20.215s 00:37:19.393 sys 0m2.027s 00:37:19.393 16:14:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:19.393 16:14:23 -- common/autotest_common.sh@10 -- # set +x 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:37:19.393 16:14:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:37:19.393 16:14:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:19.393 16:14:23 -- common/autotest_common.sh@10 -- # set +x 00:37:19.393 ************************************ 00:37:19.393 START TEST raid_superblock_test 00:37:19.393 ************************************ 00:37:19.393 16:14:23 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@343 -- # 
local raid_bdev_name=raid_bdev1 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:37:19.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@357 -- # raid_pid=75136 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@358 -- # waitforlisten 75136 /var/tmp/spdk-raid.sock 00:37:19.393 16:14:23 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:19.393 16:14:23 -- common/autotest_common.sh@819 -- # '[' -z 75136 ']' 00:37:19.393 16:14:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:19.393 16:14:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:19.393 16:14:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:19.393 16:14:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:19.393 16:14:23 -- common/autotest_common.sh@10 -- # set +x 00:37:19.393 [2024-07-22 16:14:23.524263] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:37:19.393 [2024-07-22 16:14:23.524445] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75136 ] 00:37:19.652 [2024-07-22 16:14:23.695369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:19.910 [2024-07-22 16:14:23.972746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.169 [2024-07-22 16:14:24.199867] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:20.427 16:14:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:20.427 16:14:24 -- common/autotest_common.sh@852 -- # return 0 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:20.427 16:14:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:37:20.686 malloc1 00:37:20.686 16:14:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:20.944 [2024-07-22 16:14:25.000933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:20.944 [2024-07-22 16:14:25.001290] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:37:20.944 [2024-07-22 16:14:25.001354] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:37:20.944 [2024-07-22 16:14:25.001373] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:20.944 [2024-07-22 16:14:25.004455] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:20.944 [2024-07-22 16:14:25.004648] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:20.944 pt1 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:20.944 16:14:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:37:21.202 malloc2 00:37:21.202 16:14:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:21.461 [2024-07-22 16:14:25.593529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:21.461 [2024-07-22 16:14:25.593909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:21.461 [2024-07-22 16:14:25.593979] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:37:21.461 [2024-07-22 16:14:25.594019] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.461 [2024-07-22 16:14:25.597217] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.461 [2024-07-22 16:14:25.597265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:21.461 pt2 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:21.461 16:14:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:37:21.719 malloc3 00:37:21.719 16:14:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:21.977 [2024-07-22 16:14:26.162082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:21.977 [2024-07-22 16:14:26.162457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:37:21.977 [2024-07-22 16:14:26.162519] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:37:21.977 [2024-07-22 16:14:26.162538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.977 [2024-07-22 16:14:26.165595] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.977 [2024-07-22 16:14:26.165797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:21.977 pt3 00:37:21.977 16:14:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:37:21.977 16:14:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:37:21.977 16:14:26 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:37:22.235 [2024-07-22 16:14:26.390285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:22.235 [2024-07-22 16:14:26.393031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:22.235 [2024-07-22 16:14:26.393129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:22.235 [2024-07-22 16:14:26.393407] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:37:22.235 [2024-07-22 16:14:26.393435] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:22.235 [2024-07-22 16:14:26.393597] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:37:22.235 [2024-07-22 16:14:26.394101] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:37:22.235 [2024-07-22 16:14:26.394126] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:37:22.235 [2024-07-22 16:14:26.394386] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.235 16:14:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.496 16:14:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:22.496 "name": "raid_bdev1", 00:37:22.496 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:22.496 "strip_size_kb": 0, 00:37:22.496 "state": "online", 00:37:22.496 "raid_level": "raid1", 00:37:22.496 "superblock": true, 00:37:22.496 "num_base_bdevs": 3, 00:37:22.496 "num_base_bdevs_discovered": 3, 00:37:22.496 "num_base_bdevs_operational": 3, 00:37:22.496 "base_bdevs_list": [ 00:37:22.496 { 00:37:22.496 "name": "pt1", 00:37:22.496 "uuid": 
"76882ab4-58db-54d6-8b27-e011149c0276", 00:37:22.496 "is_configured": true, 00:37:22.496 "data_offset": 2048, 00:37:22.496 "data_size": 63488 00:37:22.496 }, 00:37:22.496 { 00:37:22.496 "name": "pt2", 00:37:22.496 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:22.496 "is_configured": true, 00:37:22.496 "data_offset": 2048, 00:37:22.496 "data_size": 63488 00:37:22.496 }, 00:37:22.496 { 00:37:22.496 "name": "pt3", 00:37:22.496 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:22.496 "is_configured": true, 00:37:22.496 "data_offset": 2048, 00:37:22.496 "data_size": 63488 00:37:22.496 } 00:37:22.496 ] 00:37:22.496 }' 00:37:22.496 16:14:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:22.496 16:14:26 -- common/autotest_common.sh@10 -- # set +x 00:37:23.062 16:14:27 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:23.062 16:14:27 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:37:23.062 [2024-07-22 16:14:27.332183] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:23.321 16:14:27 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1e5425c6-7673-445d-af83-e85c21f18940 00:37:23.321 16:14:27 -- bdev/bdev_raid.sh@380 -- # '[' -z 1e5425c6-7673-445d-af83-e85c21f18940 ']' 00:37:23.321 16:14:27 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:23.321 [2024-07-22 16:14:27.591871] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:23.321 [2024-07-22 16:14:27.592119] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:23.321 [2024-07-22 16:14:27.592268] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:23.321 [2024-07-22 16:14:27.592404] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:23.321 [2024-07-22 16:14:27.592433] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:37:23.579 16:14:27 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:23.579 16:14:27 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:37:23.837 16:14:27 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:37:23.837 16:14:27 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:37:23.837 16:14:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:37:23.837 16:14:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:24.095 16:14:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:37:24.095 16:14:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:24.354 16:14:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:37:24.354 16:14:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:37:24.615 16:14:28 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:24.615 16:14:28 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:24.873 16:14:28 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:37:24.873 16:14:28 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:37:24.873 16:14:28 -- common/autotest_common.sh@640 -- # local es=0 00:37:24.873 16:14:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:37:24.873 16:14:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:24.873 16:14:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:24.873 16:14:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:24.873 16:14:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:24.873 16:14:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:24.873 16:14:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:37:24.873 16:14:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:24.873 16:14:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:24.873 16:14:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:37:25.131 [2024-07-22 16:14:29.216238] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:25.131 [2024-07-22 16:14:29.219307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:25.131 [2024-07-22 16:14:29.219380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:37:25.131 [2024-07-22 16:14:29.219476] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:37:25.131 [2024-07-22 16:14:29.219560] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:37:25.131 [2024-07-22 16:14:29.219605] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:37:25.132 [2024-07-22 16:14:29.219629] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:25.132 [2024-07-22 16:14:29.219648] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:37:25.132 request: 00:37:25.132 { 00:37:25.132 "name": "raid_bdev1", 00:37:25.132 "raid_level": "raid1", 00:37:25.132 "base_bdevs": [ 00:37:25.132 "malloc1", 00:37:25.132 "malloc2", 00:37:25.132 "malloc3" 00:37:25.132 ], 00:37:25.132 "superblock": false, 00:37:25.132 "method": "bdev_raid_create", 00:37:25.132 "req_id": 1 00:37:25.132 } 00:37:25.132 Got JSON-RPC error response 00:37:25.132 response: 00:37:25.132 { 00:37:25.132 "code": -17, 00:37:25.132 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:25.132 } 00:37:25.132 16:14:29 -- common/autotest_common.sh@643 -- # es=1 00:37:25.132 16:14:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:37:25.132 16:14:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:37:25.132 16:14:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:37:25.132 16:14:29 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:37:25.132 16:14:29 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.390 16:14:29 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:37:25.390 16:14:29 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:37:25.390 16:14:29 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:25.648 [2024-07-22 16:14:29.752481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:25.648 [2024-07-22 16:14:29.752633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:25.648 [2024-07-22 16:14:29.752681] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:37:25.648 [2024-07-22 16:14:29.752707] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:25.648 [2024-07-22 16:14:29.755843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:25.648 [2024-07-22 16:14:29.756400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:25.648 [2024-07-22 16:14:29.756564] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:37:25.648 [2024-07-22 16:14:29.756652] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:25.648 pt1 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.648 16:14:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:25.906 16:14:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:25.906 "name": "raid_bdev1", 00:37:25.906 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:25.906 "strip_size_kb": 0, 00:37:25.906 "state": "configuring", 00:37:25.906 "raid_level": "raid1", 00:37:25.906 "superblock": true, 00:37:25.906 "num_base_bdevs": 3, 00:37:25.906 "num_base_bdevs_discovered": 1, 00:37:25.906 "num_base_bdevs_operational": 3, 00:37:25.906 "base_bdevs_list": [ 00:37:25.906 { 00:37:25.906 "name": "pt1", 00:37:25.906 "uuid": "76882ab4-58db-54d6-8b27-e011149c0276", 00:37:25.906 "is_configured": true, 00:37:25.906 "data_offset": 2048, 00:37:25.906 "data_size": 63488 00:37:25.906 }, 00:37:25.906 { 00:37:25.906 "name": null, 00:37:25.906 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:25.906 "is_configured": false, 00:37:25.906 "data_offset": 2048, 00:37:25.906 "data_size": 63488 00:37:25.906 }, 00:37:25.906 { 00:37:25.906 "name": null, 00:37:25.906 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:25.906 "is_configured": false, 00:37:25.906 "data_offset": 2048, 00:37:25.906 "data_size": 63488 00:37:25.906 } 
00:37:25.906 ] 00:37:25.906 }' 00:37:25.906 16:14:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:25.906 16:14:30 -- common/autotest_common.sh@10 -- # set +x 00:37:26.165 16:14:30 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:37:26.165 16:14:30 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:26.423 [2024-07-22 16:14:30.677073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:26.423 [2024-07-22 16:14:30.677225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.423 [2024-07-22 16:14:30.677289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:37:26.423 [2024-07-22 16:14:30.677316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.423 [2024-07-22 16:14:30.677956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.423 [2024-07-22 16:14:30.678012] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:26.423 [2024-07-22 16:14:30.678135] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:37:26.423 [2024-07-22 16:14:30.678192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:26.423 pt2 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:26.682 [2024-07-22 16:14:30.913146] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.682 16:14:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.940 16:14:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:26.940 "name": "raid_bdev1", 00:37:26.940 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:26.940 "strip_size_kb": 0, 00:37:26.940 "state": "configuring", 00:37:26.940 "raid_level": "raid1", 00:37:26.940 "superblock": true, 00:37:26.940 "num_base_bdevs": 3, 00:37:26.940 "num_base_bdevs_discovered": 1, 00:37:26.940 "num_base_bdevs_operational": 3, 00:37:26.940 "base_bdevs_list": [ 00:37:26.940 { 00:37:26.940 "name": "pt1", 00:37:26.940 "uuid": "76882ab4-58db-54d6-8b27-e011149c0276", 00:37:26.940 "is_configured": true, 00:37:26.940 "data_offset": 2048, 00:37:26.940 "data_size": 63488 00:37:26.940 }, 00:37:26.940 { 00:37:26.940 "name": null, 00:37:26.940 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:26.940 "is_configured": false, 
00:37:26.940 "data_offset": 2048, 00:37:26.940 "data_size": 63488 00:37:26.940 }, 00:37:26.940 { 00:37:26.940 "name": null, 00:37:26.940 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:26.940 "is_configured": false, 00:37:26.940 "data_offset": 2048, 00:37:26.940 "data_size": 63488 00:37:26.940 } 00:37:26.940 ] 00:37:26.940 }' 00:37:26.940 16:14:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:26.940 16:14:31 -- common/autotest_common.sh@10 -- # set +x 00:37:27.532 16:14:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:37:27.532 16:14:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:37:27.532 16:14:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:27.532 [2024-07-22 16:14:31.773256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:27.532 [2024-07-22 16:14:31.773609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:27.532 [2024-07-22 16:14:31.773661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:37:27.532 [2024-07-22 16:14:31.773679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:27.532 [2024-07-22 16:14:31.774363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:27.532 [2024-07-22 16:14:31.774390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:27.532 [2024-07-22 16:14:31.774523] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:37:27.532 [2024-07-22 16:14:31.774564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:27.532 pt2 00:37:27.532 16:14:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:37:27.532 16:14:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:37:27.532 16:14:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:27.790 [2024-07-22 16:14:32.049387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:27.790 [2024-07-22 16:14:32.049734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:27.790 [2024-07-22 16:14:32.049831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:37:27.790 [2024-07-22 16:14:32.049963] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:27.790 [2024-07-22 16:14:32.050663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:27.790 [2024-07-22 16:14:32.050819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:27.790 [2024-07-22 16:14:32.051096] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:37:27.790 [2024-07-22 16:14:32.051262] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:27.790 [2024-07-22 16:14:32.051620] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:37:27.790 [2024-07-22 16:14:32.051753] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:27.790 [2024-07-22 16:14:32.051920] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:37:27.790 [2024-07-22 16:14:32.052472] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:37:27.790 [2024-07-22 16:14:32.052613] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:37:27.790 [2024-07-22 16:14:32.052899] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:27.790 pt3 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.048 16:14:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.307 16:14:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:28.307 "name": "raid_bdev1", 00:37:28.307 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:28.307 "strip_size_kb": 0, 00:37:28.307 "state": "online", 00:37:28.307 "raid_level": "raid1", 00:37:28.307 "superblock": true, 00:37:28.307 "num_base_bdevs": 3, 00:37:28.307 "num_base_bdevs_discovered": 3, 00:37:28.307 "num_base_bdevs_operational": 3, 00:37:28.307 "base_bdevs_list": [ 00:37:28.307 { 00:37:28.307 "name": "pt1", 00:37:28.307 "uuid": "76882ab4-58db-54d6-8b27-e011149c0276", 00:37:28.307 "is_configured": true, 00:37:28.307 "data_offset": 2048, 00:37:28.307 "data_size": 63488 00:37:28.307 }, 00:37:28.307 { 00:37:28.307 "name": "pt2", 00:37:28.307 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:28.307 "is_configured": true, 00:37:28.307 "data_offset": 2048, 00:37:28.307 "data_size": 63488 00:37:28.307 }, 00:37:28.307 { 00:37:28.307 "name": "pt3", 00:37:28.307 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:28.307 "is_configured": true, 00:37:28.307 "data_offset": 2048, 00:37:28.307 "data_size": 63488 00:37:28.307 } 00:37:28.307 ] 00:37:28.307 }' 00:37:28.307 16:14:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:28.307 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:37:28.565 16:14:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:28.565 16:14:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:37:28.823 [2024-07-22 16:14:32.898014] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:28.823 16:14:32 -- bdev/bdev_raid.sh@430 -- # '[' 1e5425c6-7673-445d-af83-e85c21f18940 '!=' 1e5425c6-7673-445d-af83-e85c21f18940 ']' 00:37:28.823 16:14:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:37:28.823 16:14:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:37:28.823 16:14:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:37:28.823 16:14:32 -- 
bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:29.081 [2024-07-22 16:14:33.177792] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.081 16:14:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.340 16:14:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:29.340 "name": "raid_bdev1", 00:37:29.340 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:29.340 "strip_size_kb": 0, 00:37:29.340 "state": "online", 00:37:29.340 "raid_level": "raid1", 00:37:29.340 "superblock": true, 00:37:29.340 "num_base_bdevs": 3, 00:37:29.340 "num_base_bdevs_discovered": 2, 00:37:29.340 "num_base_bdevs_operational": 2, 00:37:29.340 "base_bdevs_list": [ 00:37:29.340 { 00:37:29.340 "name": null, 00:37:29.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.340 "is_configured": false, 00:37:29.340 "data_offset": 2048, 00:37:29.340 "data_size": 63488 00:37:29.340 }, 00:37:29.340 { 00:37:29.340 "name": "pt2", 00:37:29.340 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:29.340 "is_configured": true, 00:37:29.340 "data_offset": 2048, 00:37:29.340 "data_size": 63488 00:37:29.340 }, 00:37:29.340 { 00:37:29.340 "name": "pt3", 00:37:29.340 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:29.340 "is_configured": true, 00:37:29.340 "data_offset": 2048, 00:37:29.340 "data_size": 63488 00:37:29.340 } 00:37:29.340 ] 00:37:29.340 }' 00:37:29.340 16:14:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:29.340 16:14:33 -- common/autotest_common.sh@10 -- # set +x 00:37:29.599 16:14:33 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:29.857 [2024-07-22 16:14:34.101948] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:29.857 [2024-07-22 16:14:34.102026] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:29.857 [2024-07-22 16:14:34.102159] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:29.857 [2024-07-22 16:14:34.102262] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:29.857 [2024-07-22 16:14:34.102294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:37:30.116 16:14:34 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:37:30.116 16:14:34 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:30.374 16:14:34 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:37:30.374 16:14:34 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:37:30.374 16:14:34 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:37:30.374 16:14:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:37:30.374 16:14:34 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:30.633 16:14:34 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:37:30.633 16:14:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:37:30.633 16:14:34 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:37:30.892 16:14:34 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:37:30.892 16:14:34 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:37:30.892 16:14:34 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:37:30.892 16:14:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:37:30.892 16:14:34 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:30.892 [2024-07-22 16:14:35.138163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:30.892 [2024-07-22 16:14:35.138273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:30.892 [2024-07-22 16:14:35.138311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:37:30.892 [2024-07-22 16:14:35.138335] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:30.892 [2024-07-22 16:14:35.141385] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:30.892 [2024-07-22 16:14:35.141434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:30.892 [2024-07-22 16:14:35.141560] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:37:30.892 [2024-07-22 16:14:35.141640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:30.892 pt2 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:30.892 16:14:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:31.151 16:14:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.151 16:14:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.409 16:14:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:31.409 "name": "raid_bdev1", 00:37:31.409 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:31.409 "strip_size_kb": 0, 00:37:31.409 "state": "configuring", 00:37:31.409 "raid_level": "raid1", 
00:37:31.409 "superblock": true, 00:37:31.409 "num_base_bdevs": 3, 00:37:31.409 "num_base_bdevs_discovered": 1, 00:37:31.409 "num_base_bdevs_operational": 2, 00:37:31.409 "base_bdevs_list": [ 00:37:31.409 { 00:37:31.409 "name": null, 00:37:31.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.409 "is_configured": false, 00:37:31.409 "data_offset": 2048, 00:37:31.409 "data_size": 63488 00:37:31.409 }, 00:37:31.409 { 00:37:31.409 "name": "pt2", 00:37:31.409 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:31.409 "is_configured": true, 00:37:31.409 "data_offset": 2048, 00:37:31.409 "data_size": 63488 00:37:31.409 }, 00:37:31.409 { 00:37:31.409 "name": null, 00:37:31.409 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:31.409 "is_configured": false, 00:37:31.409 "data_offset": 2048, 00:37:31.409 "data_size": 63488 00:37:31.409 } 00:37:31.409 ] 00:37:31.409 }' 00:37:31.409 16:14:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:31.409 16:14:35 -- common/autotest_common.sh@10 -- # set +x 00:37:31.669 16:14:35 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:37:31.669 16:14:35 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:37:31.669 16:14:35 -- bdev/bdev_raid.sh@462 -- # i=2 00:37:31.669 16:14:35 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:31.927 [2024-07-22 16:14:36.026496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:31.927 [2024-07-22 16:14:36.026636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:31.927 [2024-07-22 16:14:36.026710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:37:31.927 [2024-07-22 16:14:36.026746] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:31.927 [2024-07-22 16:14:36.027637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:31.927 [2024-07-22 16:14:36.027710] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:31.927 [2024-07-22 16:14:36.027876] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:37:31.927 [2024-07-22 16:14:36.027929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:31.927 [2024-07-22 16:14:36.028188] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:37:31.927 [2024-07-22 16:14:36.028223] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:31.927 [2024-07-22 16:14:36.028384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:37:31.927 [2024-07-22 16:14:36.029045] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:37:31.927 [2024-07-22 16:14:36.029087] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:37:31.927 [2024-07-22 16:14:36.029347] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:31.927 pt3 00:37:31.927 16:14:36 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:31.927 16:14:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:31.927 16:14:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:31.927 16:14:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:31.927 16:14:36 
-- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:31.927 16:14:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:31.927 16:14:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:31.927 16:14:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:31.928 16:14:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:31.928 16:14:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:31.928 16:14:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.928 16:14:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.186 16:14:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:32.186 "name": "raid_bdev1", 00:37:32.186 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:32.186 "strip_size_kb": 0, 00:37:32.186 "state": "online", 00:37:32.186 "raid_level": "raid1", 00:37:32.186 "superblock": true, 00:37:32.186 "num_base_bdevs": 3, 00:37:32.186 "num_base_bdevs_discovered": 2, 00:37:32.186 "num_base_bdevs_operational": 2, 00:37:32.186 "base_bdevs_list": [ 00:37:32.186 { 00:37:32.186 "name": null, 00:37:32.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.186 "is_configured": false, 00:37:32.186 "data_offset": 2048, 00:37:32.186 "data_size": 63488 00:37:32.186 }, 00:37:32.186 { 00:37:32.186 "name": "pt2", 00:37:32.186 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:32.186 "is_configured": true, 00:37:32.186 "data_offset": 2048, 00:37:32.186 "data_size": 63488 00:37:32.186 }, 00:37:32.186 { 00:37:32.186 "name": "pt3", 00:37:32.186 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:32.186 "is_configured": true, 00:37:32.186 "data_offset": 2048, 00:37:32.186 "data_size": 63488 00:37:32.186 } 00:37:32.186 ] 00:37:32.186 }' 00:37:32.186 16:14:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:32.186 16:14:36 -- common/autotest_common.sh@10 -- # set +x 00:37:32.445 16:14:36 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:37:32.445 16:14:36 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:33.012 [2024-07-22 16:14:37.002656] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:33.012 [2024-07-22 16:14:37.002711] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:33.012 [2024-07-22 16:14:37.002811] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:33.012 [2024-07-22 16:14:37.002897] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:33.012 [2024-07-22 16:14:37.002913] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:37:33.012 16:14:37 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.012 16:14:37 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:37:33.270 16:14:37 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:37:33.270 16:14:37 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:37:33.270 16:14:37 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:33.529 [2024-07-22 16:14:37.626944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:33.529 [2024-07-22 16:14:37.627063] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:33.529 [2024-07-22 16:14:37.627099] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:37:33.529 [2024-07-22 16:14:37.627116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:33.529 [2024-07-22 16:14:37.630582] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:33.529 [2024-07-22 16:14:37.630622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:33.529 [2024-07-22 16:14:37.630736] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:37:33.529 [2024-07-22 16:14:37.630806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:33.529 pt1 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.529 16:14:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.788 16:14:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:33.788 "name": "raid_bdev1", 00:37:33.788 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:33.788 "strip_size_kb": 0, 00:37:33.788 "state": "configuring", 00:37:33.788 "raid_level": "raid1", 00:37:33.788 "superblock": true, 00:37:33.788 "num_base_bdevs": 3, 00:37:33.788 "num_base_bdevs_discovered": 1, 00:37:33.788 "num_base_bdevs_operational": 3, 00:37:33.788 "base_bdevs_list": [ 00:37:33.788 { 00:37:33.788 "name": "pt1", 00:37:33.788 "uuid": "76882ab4-58db-54d6-8b27-e011149c0276", 00:37:33.788 "is_configured": true, 00:37:33.788 "data_offset": 2048, 00:37:33.788 "data_size": 63488 00:37:33.788 }, 00:37:33.788 { 00:37:33.788 "name": null, 00:37:33.788 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:33.788 "is_configured": false, 00:37:33.788 "data_offset": 2048, 00:37:33.788 "data_size": 63488 00:37:33.788 }, 00:37:33.788 { 00:37:33.788 "name": null, 00:37:33.788 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:33.788 "is_configured": false, 00:37:33.788 "data_offset": 2048, 00:37:33.788 "data_size": 63488 00:37:33.788 } 00:37:33.788 ] 00:37:33.788 }' 00:37:33.788 16:14:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:33.788 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:37:34.046 16:14:38 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:37:34.046 16:14:38 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:37:34.046 16:14:38 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:34.305 16:14:38 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:37:34.305 16:14:38 -- 
bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:37:34.305 16:14:38 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:37:34.563 16:14:38 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:37:34.563 16:14:38 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:37:34.563 16:14:38 -- bdev/bdev_raid.sh@489 -- # i=2 00:37:34.563 16:14:38 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:34.822 [2024-07-22 16:14:39.067452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:34.822 [2024-07-22 16:14:39.067583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:34.822 [2024-07-22 16:14:39.067625] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:37:34.822 [2024-07-22 16:14:39.067643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:34.822 [2024-07-22 16:14:39.068320] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:34.822 [2024-07-22 16:14:39.068355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:34.822 [2024-07-22 16:14:39.068483] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:37:34.822 [2024-07-22 16:14:39.068504] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:34.822 [2024-07-22 16:14:39.068552] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:34.822 [2024-07-22 16:14:39.068583] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:37:34.822 [2024-07-22 16:14:39.068667] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:34.822 pt3 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:34.822 16:14:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:35.081 16:14:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.081 16:14:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.339 16:14:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:35.339 "name": "raid_bdev1", 00:37:35.339 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:35.339 "strip_size_kb": 0, 00:37:35.339 "state": "configuring", 00:37:35.339 "raid_level": "raid1", 00:37:35.339 "superblock": true, 00:37:35.339 "num_base_bdevs": 3, 00:37:35.339 "num_base_bdevs_discovered": 1, 00:37:35.339 "num_base_bdevs_operational": 2, 00:37:35.339 "base_bdevs_list": [ 
00:37:35.339 { 00:37:35.339 "name": null, 00:37:35.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:35.339 "is_configured": false, 00:37:35.339 "data_offset": 2048, 00:37:35.339 "data_size": 63488 00:37:35.339 }, 00:37:35.339 { 00:37:35.339 "name": null, 00:37:35.339 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:35.339 "is_configured": false, 00:37:35.339 "data_offset": 2048, 00:37:35.339 "data_size": 63488 00:37:35.339 }, 00:37:35.339 { 00:37:35.339 "name": "pt3", 00:37:35.339 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:35.339 "is_configured": true, 00:37:35.339 "data_offset": 2048, 00:37:35.339 "data_size": 63488 00:37:35.339 } 00:37:35.339 ] 00:37:35.339 }' 00:37:35.339 16:14:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:35.339 16:14:39 -- common/autotest_common.sh@10 -- # set +x 00:37:35.598 16:14:39 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:37:35.598 16:14:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:37:35.598 16:14:39 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:35.856 [2024-07-22 16:14:40.035739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:35.856 [2024-07-22 16:14:40.035893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:35.856 [2024-07-22 16:14:40.035939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:37:35.856 [2024-07-22 16:14:40.035959] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:35.856 [2024-07-22 16:14:40.037040] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:35.856 [2024-07-22 16:14:40.037088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:35.856 [2024-07-22 16:14:40.037210] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:37:35.856 [2024-07-22 16:14:40.037477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:35.856 [2024-07-22 16:14:40.037667] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:37:35.856 [2024-07-22 16:14:40.037690] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:37:35.856 [2024-07-22 16:14:40.038067] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:37:35.856 [2024-07-22 16:14:40.038622] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:37:35.856 [2024-07-22 16:14:40.038649] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:37:35.856 [2024-07-22 16:14:40.038942] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:35.856 pt2 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.856 16:14:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.115 16:14:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:36.115 "name": "raid_bdev1", 00:37:36.115 "uuid": "1e5425c6-7673-445d-af83-e85c21f18940", 00:37:36.115 "strip_size_kb": 0, 00:37:36.115 "state": "online", 00:37:36.115 "raid_level": "raid1", 00:37:36.115 "superblock": true, 00:37:36.115 "num_base_bdevs": 3, 00:37:36.115 "num_base_bdevs_discovered": 2, 00:37:36.115 "num_base_bdevs_operational": 2, 00:37:36.115 "base_bdevs_list": [ 00:37:36.115 { 00:37:36.115 "name": null, 00:37:36.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.115 "is_configured": false, 00:37:36.115 "data_offset": 2048, 00:37:36.115 "data_size": 63488 00:37:36.115 }, 00:37:36.115 { 00:37:36.115 "name": "pt2", 00:37:36.115 "uuid": "d54b634c-7fc2-50a1-b279-27539adcbb18", 00:37:36.115 "is_configured": true, 00:37:36.115 "data_offset": 2048, 00:37:36.115 "data_size": 63488 00:37:36.115 }, 00:37:36.115 { 00:37:36.115 "name": "pt3", 00:37:36.115 "uuid": "57f018a9-91cc-5ab2-83be-74f877cbcfe3", 00:37:36.115 "is_configured": true, 00:37:36.115 "data_offset": 2048, 00:37:36.115 "data_size": 63488 00:37:36.115 } 00:37:36.115 ] 00:37:36.115 }' 00:37:36.115 16:14:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:36.115 16:14:40 -- common/autotest_common.sh@10 -- # set +x 00:37:36.681 16:14:40 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:36.681 16:14:40 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:37:36.939 [2024-07-22 16:14:40.989384] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:36.939 16:14:41 -- bdev/bdev_raid.sh@506 -- # '[' 1e5425c6-7673-445d-af83-e85c21f18940 '!=' 1e5425c6-7673-445d-af83-e85c21f18940 ']' 00:37:36.939 16:14:41 -- bdev/bdev_raid.sh@511 -- # killprocess 75136 00:37:36.939 16:14:41 -- common/autotest_common.sh@926 -- # '[' -z 75136 ']' 00:37:36.939 16:14:41 -- common/autotest_common.sh@930 -- # kill -0 75136 00:37:36.939 16:14:41 -- common/autotest_common.sh@931 -- # uname 00:37:36.939 16:14:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:36.939 16:14:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75136 00:37:36.939 killing process with pid 75136 00:37:36.939 16:14:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:36.939 16:14:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:36.939 16:14:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75136' 00:37:36.939 16:14:41 -- common/autotest_common.sh@945 -- # kill 75136 00:37:36.939 [2024-07-22 16:14:41.055238] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:36.939 16:14:41 -- common/autotest_common.sh@950 -- # wait 75136 00:37:36.939 [2024-07-22 16:14:41.055375] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:36.939 [2024-07-22 16:14:41.055505] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:36.939 [2024-07-22 16:14:41.055530] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:37:37.198 [2024-07-22 16:14:41.338606] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:38.571 ************************************ 00:37:38.571 END TEST raid_superblock_test 00:37:38.571 ************************************ 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@513 -- # return 0 00:37:38.571 00:37:38.571 real 0m19.249s 00:37:38.571 user 0m32.989s 00:37:38.571 sys 0m3.076s 00:37:38.571 16:14:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.571 16:14:42 -- common/autotest_common.sh@10 -- # set +x 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:37:38.571 16:14:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:37:38.571 16:14:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:38.571 16:14:42 -- common/autotest_common.sh@10 -- # set +x 00:37:38.571 ************************************ 00:37:38.571 START TEST raid_state_function_test 00:37:38.571 ************************************ 00:37:38.571 16:14:42 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:37:38.571 16:14:42 -- 
bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@226 -- # raid_pid=75713 00:37:38.571 Process raid pid: 75713 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 75713' 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@228 -- # waitforlisten 75713 /var/tmp/spdk-raid.sock 00:37:38.571 16:14:42 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:38.571 16:14:42 -- common/autotest_common.sh@819 -- # '[' -z 75713 ']' 00:37:38.571 16:14:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:38.571 16:14:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:38.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:38.571 16:14:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:38.571 16:14:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:38.571 16:14:42 -- common/autotest_common.sh@10 -- # set +x 00:37:38.571 [2024-07-22 16:14:42.835190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:37:38.572 [2024-07-22 16:14:42.835346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:38.828 [2024-07-22 16:14:43.005076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.085 [2024-07-22 16:14:43.307384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.342 [2024-07-22 16:14:43.532370] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:39.599 16:14:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:39.599 16:14:43 -- common/autotest_common.sh@852 -- # return 0 00:37:39.599 16:14:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:39.857 [2024-07-22 16:14:43.968762] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:39.857 [2024-07-22 16:14:43.968851] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:39.857 [2024-07-22 16:14:43.968867] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:39.857 [2024-07-22 16:14:43.968883] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:39.857 [2024-07-22 16:14:43.968897] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:39.857 [2024-07-22 16:14:43.968912] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:39.857 [2024-07-22 16:14:43.968922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:39.857 [2024-07-22 16:14:43.968936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:39.857 
16:14:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:39.857 16:14:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:40.115 16:14:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:40.115 "name": "Existed_Raid", 00:37:40.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.115 "strip_size_kb": 64, 00:37:40.115 "state": "configuring", 00:37:40.115 "raid_level": "raid0", 00:37:40.115 "superblock": false, 00:37:40.115 "num_base_bdevs": 4, 00:37:40.115 "num_base_bdevs_discovered": 0, 00:37:40.115 "num_base_bdevs_operational": 4, 00:37:40.115 "base_bdevs_list": [ 00:37:40.115 { 00:37:40.115 "name": "BaseBdev1", 00:37:40.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.115 "is_configured": false, 00:37:40.115 "data_offset": 0, 00:37:40.115 "data_size": 0 00:37:40.115 }, 00:37:40.115 { 00:37:40.115 "name": "BaseBdev2", 00:37:40.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.115 "is_configured": false, 00:37:40.115 "data_offset": 0, 00:37:40.115 "data_size": 0 00:37:40.115 }, 00:37:40.115 { 00:37:40.115 "name": "BaseBdev3", 00:37:40.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.115 "is_configured": false, 00:37:40.115 "data_offset": 0, 00:37:40.115 "data_size": 0 00:37:40.115 }, 00:37:40.115 { 00:37:40.115 "name": "BaseBdev4", 00:37:40.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.115 "is_configured": false, 00:37:40.115 "data_offset": 0, 00:37:40.115 "data_size": 0 00:37:40.115 } 00:37:40.115 ] 00:37:40.115 }' 00:37:40.115 16:14:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:40.115 16:14:44 -- common/autotest_common.sh@10 -- # set +x 00:37:40.372 16:14:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:40.630 [2024-07-22 16:14:44.788919] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:40.630 [2024-07-22 16:14:44.789017] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:37:40.630 16:14:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:40.888 [2024-07-22 16:14:45.037047] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:40.888 [2024-07-22 16:14:45.037152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:40.888 [2024-07-22 16:14:45.037168] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:40.888 [2024-07-22 16:14:45.037184] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:40.888 [2024-07-22 
16:14:45.037194] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:40.888 [2024-07-22 16:14:45.037208] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:40.888 [2024-07-22 16:14:45.037218] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:40.888 [2024-07-22 16:14:45.037247] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:40.888 16:14:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:41.182 [2024-07-22 16:14:45.382340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:41.182 BaseBdev1 00:37:41.182 16:14:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:37:41.182 16:14:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:37:41.182 16:14:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:41.182 16:14:45 -- common/autotest_common.sh@889 -- # local i 00:37:41.182 16:14:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:41.182 16:14:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:41.182 16:14:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:41.445 16:14:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:42.012 [ 00:37:42.012 { 00:37:42.012 "name": "BaseBdev1", 00:37:42.012 "aliases": [ 00:37:42.012 "e96bed2b-3a04-4cdd-ace6-61e285897f04" 00:37:42.012 ], 00:37:42.012 "product_name": "Malloc disk", 00:37:42.012 "block_size": 512, 00:37:42.012 "num_blocks": 65536, 00:37:42.012 "uuid": "e96bed2b-3a04-4cdd-ace6-61e285897f04", 00:37:42.012 "assigned_rate_limits": { 00:37:42.012 "rw_ios_per_sec": 0, 00:37:42.012 "rw_mbytes_per_sec": 0, 00:37:42.012 "r_mbytes_per_sec": 0, 00:37:42.012 "w_mbytes_per_sec": 0 00:37:42.012 }, 00:37:42.012 "claimed": true, 00:37:42.012 "claim_type": "exclusive_write", 00:37:42.012 "zoned": false, 00:37:42.012 "supported_io_types": { 00:37:42.012 "read": true, 00:37:42.012 "write": true, 00:37:42.012 "unmap": true, 00:37:42.012 "write_zeroes": true, 00:37:42.012 "flush": true, 00:37:42.012 "reset": true, 00:37:42.012 "compare": false, 00:37:42.012 "compare_and_write": false, 00:37:42.012 "abort": true, 00:37:42.012 "nvme_admin": false, 00:37:42.012 "nvme_io": false 00:37:42.012 }, 00:37:42.012 "memory_domains": [ 00:37:42.012 { 00:37:42.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:42.012 "dma_device_type": 2 00:37:42.012 } 00:37:42.012 ], 00:37:42.012 "driver_specific": {} 00:37:42.012 } 00:37:42.012 ] 00:37:42.012 16:14:46 -- common/autotest_common.sh@895 -- # return 0 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:42.012 "name": "Existed_Raid", 00:37:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.012 "strip_size_kb": 64, 00:37:42.012 "state": "configuring", 00:37:42.012 "raid_level": "raid0", 00:37:42.012 "superblock": false, 00:37:42.012 "num_base_bdevs": 4, 00:37:42.012 "num_base_bdevs_discovered": 1, 00:37:42.012 "num_base_bdevs_operational": 4, 00:37:42.012 "base_bdevs_list": [ 00:37:42.012 { 00:37:42.012 "name": "BaseBdev1", 00:37:42.012 "uuid": "e96bed2b-3a04-4cdd-ace6-61e285897f04", 00:37:42.012 "is_configured": true, 00:37:42.012 "data_offset": 0, 00:37:42.012 "data_size": 65536 00:37:42.012 }, 00:37:42.012 { 00:37:42.012 "name": "BaseBdev2", 00:37:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.012 "is_configured": false, 00:37:42.012 "data_offset": 0, 00:37:42.012 "data_size": 0 00:37:42.012 }, 00:37:42.012 { 00:37:42.012 "name": "BaseBdev3", 00:37:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.012 "is_configured": false, 00:37:42.012 "data_offset": 0, 00:37:42.012 "data_size": 0 00:37:42.012 }, 00:37:42.012 { 00:37:42.012 "name": "BaseBdev4", 00:37:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.012 "is_configured": false, 00:37:42.012 "data_offset": 0, 00:37:42.012 "data_size": 0 00:37:42.012 } 00:37:42.012 ] 00:37:42.012 }' 00:37:42.012 16:14:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:42.012 16:14:46 -- common/autotest_common.sh@10 -- # set +x 00:37:42.578 16:14:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:42.836 [2024-07-22 16:14:46.854790] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:42.836 [2024-07-22 16:14:46.854872] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:37:42.836 16:14:46 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:37:42.836 16:14:46 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:43.093 [2024-07-22 16:14:47.130903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:43.093 [2024-07-22 16:14:47.133220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:43.093 [2024-07-22 16:14:47.133272] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:43.093 [2024-07-22 16:14:47.133286] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:43.093 [2024-07-22 16:14:47.133301] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:43.093 [2024-07-22 16:14:47.133311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:43.093 [2024-07-22 16:14:47.133328] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:43.093 16:14:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:43.350 16:14:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:43.350 "name": "Existed_Raid", 00:37:43.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.350 "strip_size_kb": 64, 00:37:43.350 "state": "configuring", 00:37:43.350 "raid_level": "raid0", 00:37:43.350 "superblock": false, 00:37:43.350 "num_base_bdevs": 4, 00:37:43.350 "num_base_bdevs_discovered": 1, 00:37:43.350 "num_base_bdevs_operational": 4, 00:37:43.350 "base_bdevs_list": [ 00:37:43.350 { 00:37:43.350 "name": "BaseBdev1", 00:37:43.350 "uuid": "e96bed2b-3a04-4cdd-ace6-61e285897f04", 00:37:43.350 "is_configured": true, 00:37:43.350 "data_offset": 0, 00:37:43.350 "data_size": 65536 00:37:43.350 }, 00:37:43.350 { 00:37:43.350 "name": "BaseBdev2", 00:37:43.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.350 "is_configured": false, 00:37:43.350 "data_offset": 0, 00:37:43.350 "data_size": 0 00:37:43.350 }, 00:37:43.350 { 00:37:43.350 "name": "BaseBdev3", 00:37:43.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.350 "is_configured": false, 00:37:43.350 "data_offset": 0, 00:37:43.350 "data_size": 0 00:37:43.350 }, 00:37:43.350 { 00:37:43.350 "name": "BaseBdev4", 00:37:43.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.350 "is_configured": false, 00:37:43.350 "data_offset": 0, 00:37:43.350 "data_size": 0 00:37:43.350 } 00:37:43.350 ] 00:37:43.350 }' 00:37:43.351 16:14:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:43.351 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:37:43.616 16:14:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:43.874 [2024-07-22 16:14:48.047966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:43.874 BaseBdev2 00:37:43.874 16:14:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:37:43.874 16:14:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:37:43.874 16:14:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:43.874 16:14:48 -- common/autotest_common.sh@889 -- # local i 00:37:43.874 16:14:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:43.874 16:14:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:43.874 16:14:48 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:44.132 16:14:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:44.391 [ 00:37:44.391 { 00:37:44.391 "name": "BaseBdev2", 00:37:44.391 "aliases": [ 00:37:44.391 "b0aeef11-493d-4537-8956-96055e50a756" 00:37:44.391 ], 00:37:44.391 "product_name": "Malloc disk", 00:37:44.391 "block_size": 512, 00:37:44.391 "num_blocks": 65536, 00:37:44.391 "uuid": "b0aeef11-493d-4537-8956-96055e50a756", 00:37:44.391 "assigned_rate_limits": { 00:37:44.391 "rw_ios_per_sec": 0, 00:37:44.391 "rw_mbytes_per_sec": 0, 00:37:44.391 "r_mbytes_per_sec": 0, 00:37:44.391 "w_mbytes_per_sec": 0 00:37:44.391 }, 00:37:44.391 "claimed": true, 00:37:44.391 "claim_type": "exclusive_write", 00:37:44.391 "zoned": false, 00:37:44.391 "supported_io_types": { 00:37:44.392 "read": true, 00:37:44.392 "write": true, 00:37:44.392 "unmap": true, 00:37:44.392 "write_zeroes": true, 00:37:44.392 "flush": true, 00:37:44.392 "reset": true, 00:37:44.392 "compare": false, 00:37:44.392 "compare_and_write": false, 00:37:44.392 "abort": true, 00:37:44.392 "nvme_admin": false, 00:37:44.392 "nvme_io": false 00:37:44.392 }, 00:37:44.392 "memory_domains": [ 00:37:44.392 { 00:37:44.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:44.392 "dma_device_type": 2 00:37:44.392 } 00:37:44.392 ], 00:37:44.392 "driver_specific": {} 00:37:44.392 } 00:37:44.392 ] 00:37:44.392 16:14:48 -- common/autotest_common.sh@895 -- # return 0 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:44.392 16:14:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.662 16:14:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:44.662 "name": "Existed_Raid", 00:37:44.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.662 "strip_size_kb": 64, 00:37:44.662 "state": "configuring", 00:37:44.662 "raid_level": "raid0", 00:37:44.662 "superblock": false, 00:37:44.662 "num_base_bdevs": 4, 00:37:44.662 "num_base_bdevs_discovered": 2, 00:37:44.662 "num_base_bdevs_operational": 4, 00:37:44.662 "base_bdevs_list": [ 00:37:44.662 { 00:37:44.662 "name": "BaseBdev1", 00:37:44.662 "uuid": "e96bed2b-3a04-4cdd-ace6-61e285897f04", 00:37:44.662 "is_configured": true, 00:37:44.662 "data_offset": 0, 00:37:44.662 "data_size": 65536 00:37:44.662 }, 00:37:44.662 { 00:37:44.662 "name": "BaseBdev2", 00:37:44.662 "uuid": 
"b0aeef11-493d-4537-8956-96055e50a756", 00:37:44.662 "is_configured": true, 00:37:44.662 "data_offset": 0, 00:37:44.662 "data_size": 65536 00:37:44.663 }, 00:37:44.663 { 00:37:44.663 "name": "BaseBdev3", 00:37:44.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.663 "is_configured": false, 00:37:44.663 "data_offset": 0, 00:37:44.663 "data_size": 0 00:37:44.663 }, 00:37:44.663 { 00:37:44.663 "name": "BaseBdev4", 00:37:44.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.663 "is_configured": false, 00:37:44.663 "data_offset": 0, 00:37:44.663 "data_size": 0 00:37:44.663 } 00:37:44.663 ] 00:37:44.663 }' 00:37:44.663 16:14:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:44.663 16:14:48 -- common/autotest_common.sh@10 -- # set +x 00:37:45.228 16:14:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:37:45.485 [2024-07-22 16:14:49.521597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:45.485 BaseBdev3 00:37:45.485 16:14:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:37:45.485 16:14:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:37:45.485 16:14:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:45.485 16:14:49 -- common/autotest_common.sh@889 -- # local i 00:37:45.485 16:14:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:45.485 16:14:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:45.485 16:14:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:45.743 16:14:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:46.001 [ 00:37:46.001 { 00:37:46.001 "name": "BaseBdev3", 00:37:46.001 "aliases": [ 00:37:46.001 "7fca3233-a403-4e39-836c-6c8056bc0980" 00:37:46.001 ], 00:37:46.001 "product_name": "Malloc disk", 00:37:46.001 "block_size": 512, 00:37:46.001 "num_blocks": 65536, 00:37:46.001 "uuid": "7fca3233-a403-4e39-836c-6c8056bc0980", 00:37:46.001 "assigned_rate_limits": { 00:37:46.001 "rw_ios_per_sec": 0, 00:37:46.001 "rw_mbytes_per_sec": 0, 00:37:46.001 "r_mbytes_per_sec": 0, 00:37:46.001 "w_mbytes_per_sec": 0 00:37:46.001 }, 00:37:46.001 "claimed": true, 00:37:46.001 "claim_type": "exclusive_write", 00:37:46.001 "zoned": false, 00:37:46.001 "supported_io_types": { 00:37:46.001 "read": true, 00:37:46.001 "write": true, 00:37:46.001 "unmap": true, 00:37:46.001 "write_zeroes": true, 00:37:46.001 "flush": true, 00:37:46.001 "reset": true, 00:37:46.001 "compare": false, 00:37:46.001 "compare_and_write": false, 00:37:46.001 "abort": true, 00:37:46.001 "nvme_admin": false, 00:37:46.001 "nvme_io": false 00:37:46.001 }, 00:37:46.001 "memory_domains": [ 00:37:46.001 { 00:37:46.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:46.001 "dma_device_type": 2 00:37:46.001 } 00:37:46.001 ], 00:37:46.001 "driver_specific": {} 00:37:46.001 } 00:37:46.001 ] 00:37:46.001 16:14:50 -- common/autotest_common.sh@895 -- # return 0 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:46.001 16:14:50 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.001 16:14:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:46.258 16:14:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:46.258 "name": "Existed_Raid", 00:37:46.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:46.258 "strip_size_kb": 64, 00:37:46.258 "state": "configuring", 00:37:46.258 "raid_level": "raid0", 00:37:46.258 "superblock": false, 00:37:46.258 "num_base_bdevs": 4, 00:37:46.258 "num_base_bdevs_discovered": 3, 00:37:46.258 "num_base_bdevs_operational": 4, 00:37:46.258 "base_bdevs_list": [ 00:37:46.258 { 00:37:46.258 "name": "BaseBdev1", 00:37:46.258 "uuid": "e96bed2b-3a04-4cdd-ace6-61e285897f04", 00:37:46.258 "is_configured": true, 00:37:46.258 "data_offset": 0, 00:37:46.258 "data_size": 65536 00:37:46.258 }, 00:37:46.258 { 00:37:46.258 "name": "BaseBdev2", 00:37:46.258 "uuid": "b0aeef11-493d-4537-8956-96055e50a756", 00:37:46.258 "is_configured": true, 00:37:46.258 "data_offset": 0, 00:37:46.258 "data_size": 65536 00:37:46.258 }, 00:37:46.258 { 00:37:46.258 "name": "BaseBdev3", 00:37:46.258 "uuid": "7fca3233-a403-4e39-836c-6c8056bc0980", 00:37:46.258 "is_configured": true, 00:37:46.258 "data_offset": 0, 00:37:46.258 "data_size": 65536 00:37:46.258 }, 00:37:46.258 { 00:37:46.258 "name": "BaseBdev4", 00:37:46.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:46.258 "is_configured": false, 00:37:46.258 "data_offset": 0, 00:37:46.258 "data_size": 0 00:37:46.258 } 00:37:46.258 ] 00:37:46.258 }' 00:37:46.258 16:14:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:46.258 16:14:50 -- common/autotest_common.sh@10 -- # set +x 00:37:46.515 16:14:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:37:46.773 [2024-07-22 16:14:50.846869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:46.773 [2024-07-22 16:14:50.846930] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:37:46.773 [2024-07-22 16:14:50.846955] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:37:46.773 [2024-07-22 16:14:50.847117] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:37:46.773 [2024-07-22 16:14:50.847531] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:37:46.773 [2024-07-22 16:14:50.847565] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:37:46.773 [2024-07-22 16:14:50.847855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:46.773 BaseBdev4 00:37:46.773 16:14:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:37:46.773 16:14:50 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:37:46.773 16:14:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:46.773 16:14:50 -- common/autotest_common.sh@889 -- # local i 00:37:46.773 16:14:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:46.773 16:14:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:46.773 16:14:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:47.030 16:14:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:47.288 [ 00:37:47.288 { 00:37:47.288 "name": "BaseBdev4", 00:37:47.288 "aliases": [ 00:37:47.288 "f79e581f-4739-4dc2-b370-99df427a6ffc" 00:37:47.288 ], 00:37:47.288 "product_name": "Malloc disk", 00:37:47.288 "block_size": 512, 00:37:47.288 "num_blocks": 65536, 00:37:47.288 "uuid": "f79e581f-4739-4dc2-b370-99df427a6ffc", 00:37:47.288 "assigned_rate_limits": { 00:37:47.288 "rw_ios_per_sec": 0, 00:37:47.288 "rw_mbytes_per_sec": 0, 00:37:47.288 "r_mbytes_per_sec": 0, 00:37:47.288 "w_mbytes_per_sec": 0 00:37:47.288 }, 00:37:47.288 "claimed": true, 00:37:47.288 "claim_type": "exclusive_write", 00:37:47.288 "zoned": false, 00:37:47.288 "supported_io_types": { 00:37:47.288 "read": true, 00:37:47.288 "write": true, 00:37:47.288 "unmap": true, 00:37:47.288 "write_zeroes": true, 00:37:47.288 "flush": true, 00:37:47.288 "reset": true, 00:37:47.288 "compare": false, 00:37:47.288 "compare_and_write": false, 00:37:47.288 "abort": true, 00:37:47.288 "nvme_admin": false, 00:37:47.288 "nvme_io": false 00:37:47.288 }, 00:37:47.288 "memory_domains": [ 00:37:47.288 { 00:37:47.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:47.288 "dma_device_type": 2 00:37:47.288 } 00:37:47.288 ], 00:37:47.288 "driver_specific": {} 00:37:47.288 } 00:37:47.288 ] 00:37:47.288 16:14:51 -- common/autotest_common.sh@895 -- # return 0 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:47.288 16:14:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:47.547 16:14:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:47.547 "name": "Existed_Raid", 00:37:47.547 "uuid": "fd8fe9ae-4c11-436f-9bd2-b8ae16463a40", 00:37:47.547 "strip_size_kb": 64, 00:37:47.547 "state": "online", 00:37:47.547 "raid_level": "raid0", 00:37:47.547 "superblock": false, 00:37:47.547 "num_base_bdevs": 4, 00:37:47.547 
"num_base_bdevs_discovered": 4, 00:37:47.547 "num_base_bdevs_operational": 4, 00:37:47.547 "base_bdevs_list": [ 00:37:47.547 { 00:37:47.547 "name": "BaseBdev1", 00:37:47.547 "uuid": "e96bed2b-3a04-4cdd-ace6-61e285897f04", 00:37:47.547 "is_configured": true, 00:37:47.547 "data_offset": 0, 00:37:47.547 "data_size": 65536 00:37:47.547 }, 00:37:47.547 { 00:37:47.547 "name": "BaseBdev2", 00:37:47.547 "uuid": "b0aeef11-493d-4537-8956-96055e50a756", 00:37:47.547 "is_configured": true, 00:37:47.547 "data_offset": 0, 00:37:47.547 "data_size": 65536 00:37:47.547 }, 00:37:47.547 { 00:37:47.547 "name": "BaseBdev3", 00:37:47.547 "uuid": "7fca3233-a403-4e39-836c-6c8056bc0980", 00:37:47.547 "is_configured": true, 00:37:47.547 "data_offset": 0, 00:37:47.547 "data_size": 65536 00:37:47.547 }, 00:37:47.547 { 00:37:47.547 "name": "BaseBdev4", 00:37:47.547 "uuid": "f79e581f-4739-4dc2-b370-99df427a6ffc", 00:37:47.547 "is_configured": true, 00:37:47.547 "data_offset": 0, 00:37:47.547 "data_size": 65536 00:37:47.547 } 00:37:47.547 ] 00:37:47.547 }' 00:37:47.547 16:14:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:47.547 16:14:51 -- common/autotest_common.sh@10 -- # set +x 00:37:47.805 16:14:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:48.063 [2024-07-22 16:14:52.207501] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:48.063 [2024-07-22 16:14:52.207569] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:48.063 [2024-07-22 16:14:52.207646] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.063 16:14:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:48.322 16:14:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:48.322 "name": "Existed_Raid", 00:37:48.322 "uuid": "fd8fe9ae-4c11-436f-9bd2-b8ae16463a40", 00:37:48.322 "strip_size_kb": 64, 00:37:48.322 "state": "offline", 00:37:48.322 "raid_level": "raid0", 00:37:48.322 "superblock": false, 00:37:48.322 "num_base_bdevs": 4, 00:37:48.322 "num_base_bdevs_discovered": 3, 00:37:48.322 "num_base_bdevs_operational": 3, 00:37:48.322 "base_bdevs_list": [ 00:37:48.322 { 
00:37:48.322 "name": null, 00:37:48.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:48.322 "is_configured": false, 00:37:48.322 "data_offset": 0, 00:37:48.322 "data_size": 65536 00:37:48.322 }, 00:37:48.322 { 00:37:48.322 "name": "BaseBdev2", 00:37:48.322 "uuid": "b0aeef11-493d-4537-8956-96055e50a756", 00:37:48.322 "is_configured": true, 00:37:48.322 "data_offset": 0, 00:37:48.322 "data_size": 65536 00:37:48.322 }, 00:37:48.322 { 00:37:48.322 "name": "BaseBdev3", 00:37:48.322 "uuid": "7fca3233-a403-4e39-836c-6c8056bc0980", 00:37:48.322 "is_configured": true, 00:37:48.322 "data_offset": 0, 00:37:48.322 "data_size": 65536 00:37:48.322 }, 00:37:48.322 { 00:37:48.322 "name": "BaseBdev4", 00:37:48.322 "uuid": "f79e581f-4739-4dc2-b370-99df427a6ffc", 00:37:48.322 "is_configured": true, 00:37:48.322 "data_offset": 0, 00:37:48.322 "data_size": 65536 00:37:48.322 } 00:37:48.322 ] 00:37:48.322 }' 00:37:48.322 16:14:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:48.322 16:14:52 -- common/autotest_common.sh@10 -- # set +x 00:37:48.888 16:14:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:37:48.888 16:14:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:48.888 16:14:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:48.888 16:14:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:37:48.888 16:14:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:37:48.888 16:14:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:48.888 16:14:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:49.147 [2024-07-22 16:14:53.405566] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:49.407 16:14:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:37:49.407 16:14:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:49.407 16:14:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.407 16:14:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:37:49.666 16:14:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:37:49.666 16:14:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:49.666 16:14:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:37:49.925 [2024-07-22 16:14:53.972187] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:49.925 16:14:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:37:49.925 16:14:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:49.925 16:14:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.925 16:14:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:37:50.184 16:14:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:37:50.184 16:14:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:50.184 16:14:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:37:50.443 [2024-07-22 16:14:54.639627] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:37:50.443 [2024-07-22 16:14:54.639751] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 
00:37:50.701 16:14:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:37:50.701 16:14:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:37:50.701 16:14:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:50.701 16:14:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:37:50.959 16:14:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:37:50.959 16:14:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:37:50.959 16:14:54 -- bdev/bdev_raid.sh@287 -- # killprocess 75713 00:37:50.959 16:14:54 -- common/autotest_common.sh@926 -- # '[' -z 75713 ']' 00:37:50.959 16:14:54 -- common/autotest_common.sh@930 -- # kill -0 75713 00:37:50.959 16:14:54 -- common/autotest_common.sh@931 -- # uname 00:37:50.959 16:14:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:37:50.959 16:14:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75713 00:37:50.959 killing process with pid 75713 00:37:50.959 16:14:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:37:50.959 16:14:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:37:50.959 16:14:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75713' 00:37:50.959 16:14:55 -- common/autotest_common.sh@945 -- # kill 75713 00:37:50.959 [2024-07-22 16:14:55.022244] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:50.959 16:14:55 -- common/autotest_common.sh@950 -- # wait 75713 00:37:50.959 [2024-07-22 16:14:55.022415] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:37:52.336 00:37:52.336 real 0m13.530s 00:37:52.336 user 0m22.364s 00:37:52.336 sys 0m2.280s 00:37:52.336 16:14:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:52.336 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:37:52.336 ************************************ 00:37:52.336 END TEST raid_state_function_test 00:37:52.336 ************************************ 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:37:52.336 16:14:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:37:52.336 16:14:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:37:52.336 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:37:52.336 ************************************ 00:37:52.336 START TEST raid_state_function_test_sb 00:37:52.336 ************************************ 00:37:52.336 16:14:56 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:52.336 16:14:56 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:37:52.336 16:14:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=76118 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:52.337 Process raid pid: 76118 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76118' 00:37:52.337 16:14:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76118 /var/tmp/spdk-raid.sock 00:37:52.337 16:14:56 -- common/autotest_common.sh@819 -- # '[' -z 76118 ']' 00:37:52.337 16:14:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:52.337 16:14:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:37:52.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:52.337 16:14:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:52.337 16:14:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:37:52.337 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:37:52.337 [2024-07-22 16:14:56.425842] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:37:52.337 [2024-07-22 16:14:56.426042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:52.337 [2024-07-22 16:14:56.599093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:52.903 [2024-07-22 16:14:56.915036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:52.903 [2024-07-22 16:14:57.151679] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:53.161 16:14:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:37:53.161 16:14:57 -- common/autotest_common.sh@852 -- # return 0 00:37:53.161 16:14:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:53.418 [2024-07-22 16:14:57.548298] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:53.419 [2024-07-22 16:14:57.548402] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:53.419 [2024-07-22 16:14:57.548418] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:53.419 [2024-07-22 16:14:57.548435] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:53.419 [2024-07-22 16:14:57.548444] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:53.419 [2024-07-22 16:14:57.548460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:53.419 [2024-07-22 16:14:57.548469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:53.419 [2024-07-22 16:14:57.548484] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:53.419 16:14:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:53.677 16:14:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:53.677 "name": "Existed_Raid", 00:37:53.677 "uuid": "1047efb2-a67b-49ac-8ef5-490b98333ec3", 00:37:53.677 "strip_size_kb": 64, 00:37:53.677 "state": "configuring", 00:37:53.677 "raid_level": "raid0", 00:37:53.677 "superblock": true, 00:37:53.677 "num_base_bdevs": 4, 00:37:53.677 "num_base_bdevs_discovered": 0, 00:37:53.677 "num_base_bdevs_operational": 4, 00:37:53.677 "base_bdevs_list": [ 00:37:53.677 { 00:37:53.677 
"name": "BaseBdev1", 00:37:53.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:53.677 "is_configured": false, 00:37:53.677 "data_offset": 0, 00:37:53.677 "data_size": 0 00:37:53.677 }, 00:37:53.677 { 00:37:53.677 "name": "BaseBdev2", 00:37:53.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:53.677 "is_configured": false, 00:37:53.677 "data_offset": 0, 00:37:53.677 "data_size": 0 00:37:53.677 }, 00:37:53.677 { 00:37:53.677 "name": "BaseBdev3", 00:37:53.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:53.677 "is_configured": false, 00:37:53.677 "data_offset": 0, 00:37:53.677 "data_size": 0 00:37:53.677 }, 00:37:53.677 { 00:37:53.677 "name": "BaseBdev4", 00:37:53.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:53.677 "is_configured": false, 00:37:53.677 "data_offset": 0, 00:37:53.677 "data_size": 0 00:37:53.677 } 00:37:53.677 ] 00:37:53.677 }' 00:37:53.677 16:14:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:53.677 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:37:54.242 16:14:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:54.242 [2024-07-22 16:14:58.436370] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:54.242 [2024-07-22 16:14:58.436468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:37:54.242 16:14:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:54.500 [2024-07-22 16:14:58.684505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:54.500 [2024-07-22 16:14:58.684603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:54.500 [2024-07-22 16:14:58.684618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:54.500 [2024-07-22 16:14:58.684635] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:54.500 [2024-07-22 16:14:58.684644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:54.500 [2024-07-22 16:14:58.684660] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:54.500 [2024-07-22 16:14:58.684669] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:54.500 [2024-07-22 16:14:58.684683] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:54.500 16:14:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:54.758 [2024-07-22 16:14:59.000208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:54.758 BaseBdev1 00:37:54.758 16:14:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:37:54.758 16:14:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:37:54.758 16:14:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:54.758 16:14:59 -- common/autotest_common.sh@889 -- # local i 00:37:54.758 16:14:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:54.758 16:14:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:54.758 16:14:59 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:55.016 16:14:59 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:55.274 [ 00:37:55.274 { 00:37:55.274 "name": "BaseBdev1", 00:37:55.274 "aliases": [ 00:37:55.274 "babd0b13-0b4c-45f2-9482-b699076ec244" 00:37:55.274 ], 00:37:55.274 "product_name": "Malloc disk", 00:37:55.274 "block_size": 512, 00:37:55.274 "num_blocks": 65536, 00:37:55.274 "uuid": "babd0b13-0b4c-45f2-9482-b699076ec244", 00:37:55.274 "assigned_rate_limits": { 00:37:55.274 "rw_ios_per_sec": 0, 00:37:55.274 "rw_mbytes_per_sec": 0, 00:37:55.274 "r_mbytes_per_sec": 0, 00:37:55.274 "w_mbytes_per_sec": 0 00:37:55.274 }, 00:37:55.274 "claimed": true, 00:37:55.274 "claim_type": "exclusive_write", 00:37:55.274 "zoned": false, 00:37:55.274 "supported_io_types": { 00:37:55.274 "read": true, 00:37:55.274 "write": true, 00:37:55.274 "unmap": true, 00:37:55.274 "write_zeroes": true, 00:37:55.274 "flush": true, 00:37:55.274 "reset": true, 00:37:55.274 "compare": false, 00:37:55.274 "compare_and_write": false, 00:37:55.274 "abort": true, 00:37:55.274 "nvme_admin": false, 00:37:55.274 "nvme_io": false 00:37:55.274 }, 00:37:55.274 "memory_domains": [ 00:37:55.274 { 00:37:55.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:55.274 "dma_device_type": 2 00:37:55.274 } 00:37:55.274 ], 00:37:55.274 "driver_specific": {} 00:37:55.274 } 00:37:55.274 ] 00:37:55.274 16:14:59 -- common/autotest_common.sh@895 -- # return 0 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:55.274 16:14:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:55.531 16:14:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:55.531 "name": "Existed_Raid", 00:37:55.531 "uuid": "a85f89f2-0ca6-4498-b634-85834bbd11ec", 00:37:55.531 "strip_size_kb": 64, 00:37:55.531 "state": "configuring", 00:37:55.531 "raid_level": "raid0", 00:37:55.531 "superblock": true, 00:37:55.531 "num_base_bdevs": 4, 00:37:55.531 "num_base_bdevs_discovered": 1, 00:37:55.531 "num_base_bdevs_operational": 4, 00:37:55.531 "base_bdevs_list": [ 00:37:55.531 { 00:37:55.531 "name": "BaseBdev1", 00:37:55.531 "uuid": "babd0b13-0b4c-45f2-9482-b699076ec244", 00:37:55.531 "is_configured": true, 00:37:55.531 "data_offset": 2048, 00:37:55.531 "data_size": 63488 00:37:55.531 }, 00:37:55.531 { 00:37:55.531 "name": "BaseBdev2", 00:37:55.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:55.531 "is_configured": false, 00:37:55.532 "data_offset": 0, 00:37:55.532 "data_size": 0 00:37:55.532 }, 
00:37:55.532 { 00:37:55.532 "name": "BaseBdev3", 00:37:55.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:55.532 "is_configured": false, 00:37:55.532 "data_offset": 0, 00:37:55.532 "data_size": 0 00:37:55.532 }, 00:37:55.532 { 00:37:55.532 "name": "BaseBdev4", 00:37:55.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:55.532 "is_configured": false, 00:37:55.532 "data_offset": 0, 00:37:55.532 "data_size": 0 00:37:55.532 } 00:37:55.532 ] 00:37:55.532 }' 00:37:55.532 16:14:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:55.532 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:37:56.098 16:15:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:56.356 [2024-07-22 16:15:00.392700] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:56.356 [2024-07-22 16:15:00.392805] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:37:56.356 16:15:00 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:37:56.356 16:15:00 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:56.630 16:15:00 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:37:56.889 BaseBdev1 00:37:56.889 16:15:01 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:37:56.889 16:15:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:37:56.889 16:15:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:56.889 16:15:01 -- common/autotest_common.sh@889 -- # local i 00:37:56.889 16:15:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:56.889 16:15:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:56.889 16:15:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:57.147 16:15:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:57.405 [ 00:37:57.405 { 00:37:57.405 "name": "BaseBdev1", 00:37:57.405 "aliases": [ 00:37:57.405 "5747c3d6-7e59-468d-b5b8-145c65a27902" 00:37:57.405 ], 00:37:57.405 "product_name": "Malloc disk", 00:37:57.405 "block_size": 512, 00:37:57.405 "num_blocks": 65536, 00:37:57.405 "uuid": "5747c3d6-7e59-468d-b5b8-145c65a27902", 00:37:57.405 "assigned_rate_limits": { 00:37:57.405 "rw_ios_per_sec": 0, 00:37:57.405 "rw_mbytes_per_sec": 0, 00:37:57.405 "r_mbytes_per_sec": 0, 00:37:57.405 "w_mbytes_per_sec": 0 00:37:57.405 }, 00:37:57.405 "claimed": false, 00:37:57.405 "zoned": false, 00:37:57.405 "supported_io_types": { 00:37:57.405 "read": true, 00:37:57.405 "write": true, 00:37:57.405 "unmap": true, 00:37:57.405 "write_zeroes": true, 00:37:57.405 "flush": true, 00:37:57.405 "reset": true, 00:37:57.405 "compare": false, 00:37:57.405 "compare_and_write": false, 00:37:57.405 "abort": true, 00:37:57.405 "nvme_admin": false, 00:37:57.405 "nvme_io": false 00:37:57.405 }, 00:37:57.405 "memory_domains": [ 00:37:57.405 { 00:37:57.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:57.406 "dma_device_type": 2 00:37:57.406 } 00:37:57.406 ], 00:37:57.406 "driver_specific": {} 00:37:57.406 } 00:37:57.406 ] 00:37:57.406 16:15:01 -- common/autotest_common.sh@895 -- # return 0 00:37:57.406 16:15:01 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:37:57.664 [2024-07-22 16:15:01.777072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:57.664 [2024-07-22 16:15:01.780263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:57.664 [2024-07-22 16:15:01.780355] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:57.664 [2024-07-22 16:15:01.780374] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:57.664 [2024-07-22 16:15:01.780395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:57.664 [2024-07-22 16:15:01.780408] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:57.664 [2024-07-22 16:15:01.780430] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:57.664 16:15:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:57.923 16:15:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:57.923 "name": "Existed_Raid", 00:37:57.923 "uuid": "36005814-7f92-455a-b6fb-9ab5f7258a63", 00:37:57.923 "strip_size_kb": 64, 00:37:57.923 "state": "configuring", 00:37:57.923 "raid_level": "raid0", 00:37:57.923 "superblock": true, 00:37:57.923 "num_base_bdevs": 4, 00:37:57.923 "num_base_bdevs_discovered": 1, 00:37:57.923 "num_base_bdevs_operational": 4, 00:37:57.923 "base_bdevs_list": [ 00:37:57.923 { 00:37:57.923 "name": "BaseBdev1", 00:37:57.923 "uuid": "5747c3d6-7e59-468d-b5b8-145c65a27902", 00:37:57.923 "is_configured": true, 00:37:57.923 "data_offset": 2048, 00:37:57.923 "data_size": 63488 00:37:57.923 }, 00:37:57.923 { 00:37:57.923 "name": "BaseBdev2", 00:37:57.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:57.923 "is_configured": false, 00:37:57.923 "data_offset": 0, 00:37:57.923 "data_size": 0 00:37:57.923 }, 00:37:57.923 { 00:37:57.923 "name": "BaseBdev3", 00:37:57.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:57.923 "is_configured": false, 00:37:57.923 "data_offset": 0, 00:37:57.923 "data_size": 0 00:37:57.923 }, 00:37:57.923 { 00:37:57.923 "name": "BaseBdev4", 00:37:57.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:57.923 "is_configured": 
false, 00:37:57.923 "data_offset": 0, 00:37:57.923 "data_size": 0 00:37:57.923 } 00:37:57.923 ] 00:37:57.923 }' 00:37:57.923 16:15:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:57.923 16:15:02 -- common/autotest_common.sh@10 -- # set +x 00:37:58.181 16:15:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:37:58.439 [2024-07-22 16:15:02.677566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:58.439 BaseBdev2 00:37:58.439 16:15:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:37:58.439 16:15:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:37:58.439 16:15:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:37:58.439 16:15:02 -- common/autotest_common.sh@889 -- # local i 00:37:58.439 16:15:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:37:58.439 16:15:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:37:58.439 16:15:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:58.697 16:15:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:58.956 [ 00:37:58.956 { 00:37:58.956 "name": "BaseBdev2", 00:37:58.956 "aliases": [ 00:37:58.956 "2be001a5-9918-4686-9069-f0f5a10dea53" 00:37:58.956 ], 00:37:58.956 "product_name": "Malloc disk", 00:37:58.956 "block_size": 512, 00:37:58.956 "num_blocks": 65536, 00:37:58.956 "uuid": "2be001a5-9918-4686-9069-f0f5a10dea53", 00:37:58.956 "assigned_rate_limits": { 00:37:58.956 "rw_ios_per_sec": 0, 00:37:58.956 "rw_mbytes_per_sec": 0, 00:37:58.956 "r_mbytes_per_sec": 0, 00:37:58.956 "w_mbytes_per_sec": 0 00:37:58.956 }, 00:37:58.956 "claimed": true, 00:37:58.956 "claim_type": "exclusive_write", 00:37:58.956 "zoned": false, 00:37:58.956 "supported_io_types": { 00:37:58.956 "read": true, 00:37:58.956 "write": true, 00:37:58.956 "unmap": true, 00:37:58.956 "write_zeroes": true, 00:37:58.956 "flush": true, 00:37:58.956 "reset": true, 00:37:58.956 "compare": false, 00:37:58.956 "compare_and_write": false, 00:37:58.956 "abort": true, 00:37:58.956 "nvme_admin": false, 00:37:58.956 "nvme_io": false 00:37:58.956 }, 00:37:58.956 "memory_domains": [ 00:37:58.956 { 00:37:58.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:58.956 "dma_device_type": 2 00:37:58.956 } 00:37:58.956 ], 00:37:58.956 "driver_specific": {} 00:37:58.956 } 00:37:58.956 ] 00:37:58.956 16:15:03 -- common/autotest_common.sh@895 -- # return 0 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:37:58.956 
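Throughout this run the assertion step is the same: dump all RAID bdevs over the test socket, pick out the one under test with jq, and compare fields such as state and num_base_bdevs_discovered. A hedged sketch of that check, built only from the bdev_raid_get_bdevs call and jq filter that appear repeatedly in the trace; the helper name and argument handling are illustrative:

```bash
# Sketch of the state assertion the trace keeps repeating (names as in this run).
rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-raid.sock

verify_raid_state() {
    local name=$1 expected_state=$2 expected_discovered=$3
    local info
    # Same query + filter pair used above: bdev_raid_get_bdevs all, then jq select.
    info=$("$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]] &&
        [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq $expected_discovered ]]
}

# The assertion being made at this point in the log: configuring, 2 of 4 discovered.
verify_raid_state Existed_Raid configuring 2
```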
16:15:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.956 16:15:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:59.215 16:15:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:37:59.215 "name": "Existed_Raid", 00:37:59.215 "uuid": "36005814-7f92-455a-b6fb-9ab5f7258a63", 00:37:59.215 "strip_size_kb": 64, 00:37:59.215 "state": "configuring", 00:37:59.215 "raid_level": "raid0", 00:37:59.215 "superblock": true, 00:37:59.215 "num_base_bdevs": 4, 00:37:59.215 "num_base_bdevs_discovered": 2, 00:37:59.215 "num_base_bdevs_operational": 4, 00:37:59.215 "base_bdevs_list": [ 00:37:59.215 { 00:37:59.215 "name": "BaseBdev1", 00:37:59.215 "uuid": "5747c3d6-7e59-468d-b5b8-145c65a27902", 00:37:59.215 "is_configured": true, 00:37:59.215 "data_offset": 2048, 00:37:59.215 "data_size": 63488 00:37:59.215 }, 00:37:59.215 { 00:37:59.215 "name": "BaseBdev2", 00:37:59.215 "uuid": "2be001a5-9918-4686-9069-f0f5a10dea53", 00:37:59.215 "is_configured": true, 00:37:59.215 "data_offset": 2048, 00:37:59.215 "data_size": 63488 00:37:59.215 }, 00:37:59.215 { 00:37:59.215 "name": "BaseBdev3", 00:37:59.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.215 "is_configured": false, 00:37:59.215 "data_offset": 0, 00:37:59.215 "data_size": 0 00:37:59.215 }, 00:37:59.215 { 00:37:59.215 "name": "BaseBdev4", 00:37:59.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.215 "is_configured": false, 00:37:59.215 "data_offset": 0, 00:37:59.215 "data_size": 0 00:37:59.215 } 00:37:59.215 ] 00:37:59.215 }' 00:37:59.215 16:15:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:37:59.215 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:37:59.782 16:15:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:38:00.041 [2024-07-22 16:15:04.085267] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:00.041 BaseBdev3 00:38:00.041 16:15:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:38:00.041 16:15:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:38:00.041 16:15:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:00.041 16:15:04 -- common/autotest_common.sh@889 -- # local i 00:38:00.041 16:15:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:00.041 16:15:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:00.041 16:15:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:00.299 16:15:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:00.558 [ 00:38:00.558 { 00:38:00.558 "name": "BaseBdev3", 00:38:00.558 "aliases": [ 00:38:00.558 "d0697b3f-e647-4f78-9cac-4c5a07d47ec8" 00:38:00.558 ], 00:38:00.558 "product_name": "Malloc disk", 00:38:00.558 "block_size": 512, 00:38:00.558 "num_blocks": 65536, 00:38:00.558 "uuid": "d0697b3f-e647-4f78-9cac-4c5a07d47ec8", 00:38:00.558 "assigned_rate_limits": { 00:38:00.558 "rw_ios_per_sec": 0, 00:38:00.558 "rw_mbytes_per_sec": 0, 00:38:00.558 "r_mbytes_per_sec": 0, 00:38:00.558 "w_mbytes_per_sec": 0 00:38:00.558 }, 00:38:00.558 "claimed": true, 00:38:00.558 "claim_type": "exclusive_write", 00:38:00.558 "zoned": false, 
00:38:00.558 "supported_io_types": { 00:38:00.558 "read": true, 00:38:00.558 "write": true, 00:38:00.558 "unmap": true, 00:38:00.558 "write_zeroes": true, 00:38:00.558 "flush": true, 00:38:00.558 "reset": true, 00:38:00.558 "compare": false, 00:38:00.558 "compare_and_write": false, 00:38:00.558 "abort": true, 00:38:00.558 "nvme_admin": false, 00:38:00.558 "nvme_io": false 00:38:00.558 }, 00:38:00.558 "memory_domains": [ 00:38:00.558 { 00:38:00.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:00.558 "dma_device_type": 2 00:38:00.558 } 00:38:00.558 ], 00:38:00.558 "driver_specific": {} 00:38:00.558 } 00:38:00.558 ] 00:38:00.558 16:15:04 -- common/autotest_common.sh@895 -- # return 0 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:00.558 16:15:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:00.817 16:15:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:00.817 "name": "Existed_Raid", 00:38:00.817 "uuid": "36005814-7f92-455a-b6fb-9ab5f7258a63", 00:38:00.817 "strip_size_kb": 64, 00:38:00.817 "state": "configuring", 00:38:00.817 "raid_level": "raid0", 00:38:00.817 "superblock": true, 00:38:00.817 "num_base_bdevs": 4, 00:38:00.817 "num_base_bdevs_discovered": 3, 00:38:00.817 "num_base_bdevs_operational": 4, 00:38:00.817 "base_bdevs_list": [ 00:38:00.817 { 00:38:00.817 "name": "BaseBdev1", 00:38:00.817 "uuid": "5747c3d6-7e59-468d-b5b8-145c65a27902", 00:38:00.817 "is_configured": true, 00:38:00.817 "data_offset": 2048, 00:38:00.817 "data_size": 63488 00:38:00.817 }, 00:38:00.817 { 00:38:00.817 "name": "BaseBdev2", 00:38:00.817 "uuid": "2be001a5-9918-4686-9069-f0f5a10dea53", 00:38:00.817 "is_configured": true, 00:38:00.817 "data_offset": 2048, 00:38:00.817 "data_size": 63488 00:38:00.817 }, 00:38:00.817 { 00:38:00.817 "name": "BaseBdev3", 00:38:00.817 "uuid": "d0697b3f-e647-4f78-9cac-4c5a07d47ec8", 00:38:00.817 "is_configured": true, 00:38:00.817 "data_offset": 2048, 00:38:00.817 "data_size": 63488 00:38:00.817 }, 00:38:00.817 { 00:38:00.817 "name": "BaseBdev4", 00:38:00.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.817 "is_configured": false, 00:38:00.817 "data_offset": 0, 00:38:00.817 "data_size": 0 00:38:00.817 } 00:38:00.817 ] 00:38:00.817 }' 00:38:00.817 16:15:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:00.817 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:38:01.075 16:15:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:38:01.333 [2024-07-22 16:15:05.569621] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:01.333 [2024-07-22 16:15:05.570053] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:38:01.333 [2024-07-22 16:15:05.570074] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:01.333 [2024-07-22 16:15:05.570229] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:38:01.333 [2024-07-22 16:15:05.570628] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:38:01.333 [2024-07-22 16:15:05.570657] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:38:01.333 [2024-07-22 16:15:05.570849] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:01.333 BaseBdev4 00:38:01.333 16:15:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:38:01.333 16:15:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:38:01.333 16:15:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:01.333 16:15:05 -- common/autotest_common.sh@889 -- # local i 00:38:01.333 16:15:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:01.333 16:15:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:01.333 16:15:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:01.899 16:15:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:01.899 [ 00:38:01.899 { 00:38:01.900 "name": "BaseBdev4", 00:38:01.900 "aliases": [ 00:38:01.900 "5300ecd6-50e0-4879-8a96-95f812ab5953" 00:38:01.900 ], 00:38:01.900 "product_name": "Malloc disk", 00:38:01.900 "block_size": 512, 00:38:01.900 "num_blocks": 65536, 00:38:01.900 "uuid": "5300ecd6-50e0-4879-8a96-95f812ab5953", 00:38:01.900 "assigned_rate_limits": { 00:38:01.900 "rw_ios_per_sec": 0, 00:38:01.900 "rw_mbytes_per_sec": 0, 00:38:01.900 "r_mbytes_per_sec": 0, 00:38:01.900 "w_mbytes_per_sec": 0 00:38:01.900 }, 00:38:01.900 "claimed": true, 00:38:01.900 "claim_type": "exclusive_write", 00:38:01.900 "zoned": false, 00:38:01.900 "supported_io_types": { 00:38:01.900 "read": true, 00:38:01.900 "write": true, 00:38:01.900 "unmap": true, 00:38:01.900 "write_zeroes": true, 00:38:01.900 "flush": true, 00:38:01.900 "reset": true, 00:38:01.900 "compare": false, 00:38:01.900 "compare_and_write": false, 00:38:01.900 "abort": true, 00:38:01.900 "nvme_admin": false, 00:38:01.900 "nvme_io": false 00:38:01.900 }, 00:38:01.900 "memory_domains": [ 00:38:01.900 { 00:38:01.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:01.900 "dma_device_type": 2 00:38:01.900 } 00:38:01.900 ], 00:38:01.900 "driver_specific": {} 00:38:01.900 } 00:38:01.900 ] 00:38:01.900 16:15:06 -- common/autotest_common.sh@895 -- # return 0 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
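Each time a base bdev is added, the trace above waits for it with the same two RPCs before asserting anything: bdev_wait_for_examine, then bdev_get_bdevs with the same -t 2000 the test passes. A hedged sketch of that wait step, using only the calls and arguments visible in this log; the wrapper function name is illustrative:

```bash
# Sketch of the per-bdev wait step traced above (socket and timeout as in this run).
rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-raid.sock

wait_for_bdev() {
    local bdev=$1
    # Let any pending examine callbacks finish first.
    "$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_wait_for_examine
    # Then query the bdev with the same -t 2000 wait the trace shows.
    "$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_get_bdevs -b "$bdev" -t 2000 > /dev/null
}

wait_for_bdev BaseBdev4
```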
00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:01.900 16:15:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:02.158 16:15:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:02.158 "name": "Existed_Raid", 00:38:02.158 "uuid": "36005814-7f92-455a-b6fb-9ab5f7258a63", 00:38:02.158 "strip_size_kb": 64, 00:38:02.158 "state": "online", 00:38:02.158 "raid_level": "raid0", 00:38:02.158 "superblock": true, 00:38:02.158 "num_base_bdevs": 4, 00:38:02.158 "num_base_bdevs_discovered": 4, 00:38:02.158 "num_base_bdevs_operational": 4, 00:38:02.158 "base_bdevs_list": [ 00:38:02.158 { 00:38:02.158 "name": "BaseBdev1", 00:38:02.158 "uuid": "5747c3d6-7e59-468d-b5b8-145c65a27902", 00:38:02.158 "is_configured": true, 00:38:02.158 "data_offset": 2048, 00:38:02.158 "data_size": 63488 00:38:02.158 }, 00:38:02.158 { 00:38:02.158 "name": "BaseBdev2", 00:38:02.158 "uuid": "2be001a5-9918-4686-9069-f0f5a10dea53", 00:38:02.158 "is_configured": true, 00:38:02.158 "data_offset": 2048, 00:38:02.158 "data_size": 63488 00:38:02.158 }, 00:38:02.158 { 00:38:02.158 "name": "BaseBdev3", 00:38:02.158 "uuid": "d0697b3f-e647-4f78-9cac-4c5a07d47ec8", 00:38:02.158 "is_configured": true, 00:38:02.158 "data_offset": 2048, 00:38:02.158 "data_size": 63488 00:38:02.158 }, 00:38:02.158 { 00:38:02.158 "name": "BaseBdev4", 00:38:02.158 "uuid": "5300ecd6-50e0-4879-8a96-95f812ab5953", 00:38:02.158 "is_configured": true, 00:38:02.158 "data_offset": 2048, 00:38:02.158 "data_size": 63488 00:38:02.158 } 00:38:02.158 ] 00:38:02.158 }' 00:38:02.158 16:15:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:02.158 16:15:06 -- common/autotest_common.sh@10 -- # set +x 00:38:02.730 16:15:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:38:03.004 [2024-07-22 16:15:07.014255] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:03.004 [2024-07-22 16:15:07.014563] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:03.004 [2024-07-22 16:15:07.014839] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:03.004 16:15:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:03.314 16:15:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:03.314 "name": "Existed_Raid", 00:38:03.314 "uuid": "36005814-7f92-455a-b6fb-9ab5f7258a63", 00:38:03.314 "strip_size_kb": 64, 00:38:03.314 "state": "offline", 00:38:03.314 "raid_level": "raid0", 00:38:03.314 "superblock": true, 00:38:03.314 "num_base_bdevs": 4, 00:38:03.314 "num_base_bdevs_discovered": 3, 00:38:03.314 "num_base_bdevs_operational": 3, 00:38:03.314 "base_bdevs_list": [ 00:38:03.314 { 00:38:03.314 "name": null, 00:38:03.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:03.314 "is_configured": false, 00:38:03.314 "data_offset": 2048, 00:38:03.314 "data_size": 63488 00:38:03.314 }, 00:38:03.314 { 00:38:03.314 "name": "BaseBdev2", 00:38:03.314 "uuid": "2be001a5-9918-4686-9069-f0f5a10dea53", 00:38:03.314 "is_configured": true, 00:38:03.314 "data_offset": 2048, 00:38:03.314 "data_size": 63488 00:38:03.314 }, 00:38:03.314 { 00:38:03.314 "name": "BaseBdev3", 00:38:03.314 "uuid": "d0697b3f-e647-4f78-9cac-4c5a07d47ec8", 00:38:03.314 "is_configured": true, 00:38:03.314 "data_offset": 2048, 00:38:03.314 "data_size": 63488 00:38:03.314 }, 00:38:03.314 { 00:38:03.314 "name": "BaseBdev4", 00:38:03.314 "uuid": "5300ecd6-50e0-4879-8a96-95f812ab5953", 00:38:03.314 "is_configured": true, 00:38:03.314 "data_offset": 2048, 00:38:03.314 "data_size": 63488 00:38:03.314 } 00:38:03.314 ] 00:38:03.314 }' 00:38:03.314 16:15:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:03.314 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:38:03.574 16:15:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:38:03.574 16:15:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:03.574 16:15:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:03.574 16:15:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:03.832 16:15:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:03.832 16:15:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:03.832 16:15:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:38:04.091 [2024-07-22 16:15:08.128557] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:04.091 16:15:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:04.091 16:15:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:04.091 16:15:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:04.091 16:15:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:04.349 16:15:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:04.349 16:15:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:04.349 16:15:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:38:04.607 [2024-07-22 16:15:08.759561] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:04.607 16:15:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:04.607 16:15:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:04.607 16:15:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:04.607 16:15:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:04.865 16:15:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:04.865 16:15:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:04.865 16:15:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:38:05.123 [2024-07-22 16:15:09.356682] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:38:05.123 [2024-07-22 16:15:09.356823] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:38:05.382 16:15:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:05.382 16:15:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:05.382 16:15:09 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:05.382 16:15:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:38:05.640 16:15:09 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:38:05.640 16:15:09 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:38:05.640 16:15:09 -- bdev/bdev_raid.sh@287 -- # killprocess 76118 00:38:05.640 16:15:09 -- common/autotest_common.sh@926 -- # '[' -z 76118 ']' 00:38:05.640 16:15:09 -- common/autotest_common.sh@930 -- # kill -0 76118 00:38:05.640 16:15:09 -- common/autotest_common.sh@931 -- # uname 00:38:05.640 16:15:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:05.640 16:15:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76118 00:38:05.640 killing process with pid 76118 00:38:05.640 16:15:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:38:05.640 16:15:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:38:05.640 16:15:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76118' 00:38:05.640 16:15:09 -- common/autotest_common.sh@945 -- # kill 76118 00:38:05.640 16:15:09 -- common/autotest_common.sh@950 -- # wait 76118 00:38:05.640 [2024-07-22 16:15:09.751039] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:05.640 [2024-07-22 16:15:09.751219] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:38:07.018 00:38:07.018 real 0m14.741s 00:38:07.018 user 0m24.474s 00:38:07.018 sys 0m2.345s 00:38:07.018 16:15:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:07.018 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:38:07.018 ************************************ 00:38:07.018 END TEST raid_state_function_test_sb 00:38:07.018 ************************************ 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:38:07.018 16:15:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:38:07.018 16:15:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:07.018 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:38:07.018 ************************************ 00:38:07.018 START TEST 
raid_superblock_test 00:38:07.018 ************************************ 00:38:07.018 16:15:11 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:38:07.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@357 -- # raid_pid=76539 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@358 -- # waitforlisten 76539 /var/tmp/spdk-raid.sock 00:38:07.018 16:15:11 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:38:07.018 16:15:11 -- common/autotest_common.sh@819 -- # '[' -z 76539 ']' 00:38:07.018 16:15:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:07.018 16:15:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:07.018 16:15:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:07.018 16:15:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:07.018 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:38:07.018 [2024-07-22 16:15:11.212869] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
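raid_superblock_test builds its array out of passthru bdevs layered on malloc bdevs with fixed UUIDs, then assembles them into a raid0 with an on-disk superblock (-s). A hedged sketch of that construction, condensed from the RPC sequence traced below; the loop and the rpc wrapper are illustrative, while the individual calls and arguments are the ones this log shows:

```bash
# Sketch of the base-bdev construction traced below (names/UUIDs as in this run).
rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-raid.sock
rpc() { "$rootdir/scripts/rpc.py" -s "$rpc_sock" "$@"; }

base_bdevs_pt=()
for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"   # 32 MiB backing store, 512 B blocks
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
    base_bdevs_pt+=("pt$i")
done

# Assemble a raid0 (64 KiB strip) with superblock over the four passthru bdevs.
rpc bdev_raid_create -z 64 -s -r raid0 -b "${base_bdevs_pt[*]}" -n raid_bdev1
```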
00:38:07.018 [2024-07-22 16:15:11.215502] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76539 ] 00:38:07.277 [2024-07-22 16:15:11.390606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.536 [2024-07-22 16:15:11.700782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.795 [2024-07-22 16:15:11.922687] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:08.054 16:15:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:08.054 16:15:12 -- common/autotest_common.sh@852 -- # return 0 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:08.054 16:15:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:38:08.312 malloc1 00:38:08.312 16:15:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:08.571 [2024-07-22 16:15:12.720523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:08.571 [2024-07-22 16:15:12.720648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:08.571 [2024-07-22 16:15:12.720700] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:38:08.571 [2024-07-22 16:15:12.720716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:08.571 [2024-07-22 16:15:12.723784] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:08.571 [2024-07-22 16:15:12.723833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:08.571 pt1 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:08.571 16:15:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:38:08.830 malloc2 00:38:08.830 16:15:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:38:09.088 [2024-07-22 16:15:13.328500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:09.088 [2024-07-22 16:15:13.328818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:09.088 [2024-07-22 16:15:13.328908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:38:09.088 [2024-07-22 16:15:13.329206] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:09.088 [2024-07-22 16:15:13.332334] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:09.088 [2024-07-22 16:15:13.332379] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:09.088 pt2 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:09.088 16:15:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:38:09.710 malloc3 00:38:09.710 16:15:13 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:09.710 [2024-07-22 16:15:13.880676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:09.710 [2024-07-22 16:15:13.880795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:09.710 [2024-07-22 16:15:13.880843] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:38:09.710 [2024-07-22 16:15:13.880860] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:09.710 [2024-07-22 16:15:13.884076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:09.710 [2024-07-22 16:15:13.884124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:09.710 pt3 00:38:09.710 16:15:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:09.710 16:15:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:09.710 16:15:13 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:38:09.710 16:15:13 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:38:09.710 16:15:13 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:38:09.710 16:15:13 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:09.711 16:15:13 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:09.711 16:15:13 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:09.711 16:15:13 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:38:09.993 malloc4 00:38:09.993 16:15:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:38:10.251 [2024-07-22 16:15:14.520013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:10.251 [2024-07-22 16:15:14.520132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:10.251 [2024-07-22 16:15:14.520198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:38:10.251 [2024-07-22 16:15:14.520216] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:10.251 [2024-07-22 16:15:14.523206] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:10.251 [2024-07-22 16:15:14.523254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:10.510 pt4 00:38:10.510 16:15:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:10.510 16:15:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:10.510 16:15:14 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:38:10.768 [2024-07-22 16:15:14.832208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:10.768 [2024-07-22 16:15:14.834768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:10.768 [2024-07-22 16:15:14.834904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:10.768 [2024-07-22 16:15:14.835146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:10.768 [2024-07-22 16:15:14.835491] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:38:10.768 [2024-07-22 16:15:14.835728] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:10.768 [2024-07-22 16:15:14.835915] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:38:10.768 [2024-07-22 16:15:14.836403] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:38:10.768 [2024-07-22 16:15:14.836432] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:38:10.768 [2024-07-22 16:15:14.836680] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:10.768 16:15:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:11.026 16:15:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:11.026 "name": "raid_bdev1", 00:38:11.026 "uuid": 
"1f108a1b-101d-43f5-9a49-1c62fd131798", 00:38:11.026 "strip_size_kb": 64, 00:38:11.026 "state": "online", 00:38:11.026 "raid_level": "raid0", 00:38:11.026 "superblock": true, 00:38:11.026 "num_base_bdevs": 4, 00:38:11.026 "num_base_bdevs_discovered": 4, 00:38:11.026 "num_base_bdevs_operational": 4, 00:38:11.026 "base_bdevs_list": [ 00:38:11.026 { 00:38:11.026 "name": "pt1", 00:38:11.026 "uuid": "59ac7729-7929-5e15-85a8-8a5abdb08558", 00:38:11.026 "is_configured": true, 00:38:11.026 "data_offset": 2048, 00:38:11.026 "data_size": 63488 00:38:11.026 }, 00:38:11.026 { 00:38:11.026 "name": "pt2", 00:38:11.026 "uuid": "478a4d11-3946-5753-8d8b-432017c4b591", 00:38:11.026 "is_configured": true, 00:38:11.026 "data_offset": 2048, 00:38:11.026 "data_size": 63488 00:38:11.026 }, 00:38:11.026 { 00:38:11.026 "name": "pt3", 00:38:11.026 "uuid": "6ed11ee9-8661-529d-9e7d-360d4a9ae5a6", 00:38:11.026 "is_configured": true, 00:38:11.026 "data_offset": 2048, 00:38:11.026 "data_size": 63488 00:38:11.026 }, 00:38:11.026 { 00:38:11.026 "name": "pt4", 00:38:11.026 "uuid": "62376abd-e0bb-55a4-b413-a0787a598313", 00:38:11.026 "is_configured": true, 00:38:11.026 "data_offset": 2048, 00:38:11.026 "data_size": 63488 00:38:11.026 } 00:38:11.026 ] 00:38:11.026 }' 00:38:11.026 16:15:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:11.026 16:15:15 -- common/autotest_common.sh@10 -- # set +x 00:38:11.285 16:15:15 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:11.285 16:15:15 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:38:11.544 [2024-07-22 16:15:15.665467] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:11.544 16:15:15 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1f108a1b-101d-43f5-9a49-1c62fd131798 00:38:11.544 16:15:15 -- bdev/bdev_raid.sh@380 -- # '[' -z 1f108a1b-101d-43f5-9a49-1c62fd131798 ']' 00:38:11.544 16:15:15 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:11.802 [2024-07-22 16:15:15.896931] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:11.802 [2024-07-22 16:15:15.897014] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:11.802 [2024-07-22 16:15:15.897142] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:11.802 [2024-07-22 16:15:15.897246] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:11.802 [2024-07-22 16:15:15.897266] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:38:11.802 16:15:15 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:11.802 16:15:15 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:38:12.061 16:15:16 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:38:12.061 16:15:16 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:38:12.061 16:15:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:12.061 16:15:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:12.319 16:15:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:12.319 16:15:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:38:12.577 16:15:16 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:12.577 16:15:16 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:38:12.835 16:15:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:12.835 16:15:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:38:13.093 16:15:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:13.093 16:15:17 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:38:13.352 16:15:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:38:13.352 16:15:17 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:13.352 16:15:17 -- common/autotest_common.sh@640 -- # local es=0 00:38:13.352 16:15:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:13.352 16:15:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:13.352 16:15:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:13.352 16:15:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:13.352 16:15:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:13.352 16:15:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:13.352 16:15:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:13.352 16:15:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:13.352 16:15:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:13.352 16:15:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:13.610 [2024-07-22 16:15:17.774159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:13.610 [2024-07-22 16:15:17.776902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:13.610 [2024-07-22 16:15:17.777010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:38:13.610 [2024-07-22 16:15:17.777071] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:38:13.610 [2024-07-22 16:15:17.777151] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:38:13.610 [2024-07-22 16:15:17.777230] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:38:13.610 [2024-07-22 16:15:17.777266] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:38:13.610 [2024-07-22 16:15:17.777294] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:38:13.610 [2024-07-22 16:15:17.777318] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:13.610 [2024-07-22 16:15:17.777336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:38:13.610 request: 00:38:13.610 { 00:38:13.610 "name": "raid_bdev1", 00:38:13.610 "raid_level": "raid0", 00:38:13.610 "base_bdevs": [ 00:38:13.610 "malloc1", 00:38:13.610 "malloc2", 00:38:13.610 "malloc3", 00:38:13.610 "malloc4" 00:38:13.610 ], 00:38:13.610 "superblock": false, 00:38:13.610 "strip_size_kb": 64, 00:38:13.610 "method": "bdev_raid_create", 00:38:13.610 "req_id": 1 00:38:13.610 } 00:38:13.610 Got JSON-RPC error response 00:38:13.610 response: 00:38:13.610 { 00:38:13.610 "code": -17, 00:38:13.610 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:13.610 } 00:38:13.610 16:15:17 -- common/autotest_common.sh@643 -- # es=1 00:38:13.610 16:15:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:38:13.610 16:15:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:38:13.610 16:15:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:38:13.610 16:15:17 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:38:13.610 16:15:17 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:13.868 16:15:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:38:13.868 16:15:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:38:13.868 16:15:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:14.126 [2024-07-22 16:15:18.290360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:14.126 [2024-07-22 16:15:18.290505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:14.126 [2024-07-22 16:15:18.290554] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:38:14.126 [2024-07-22 16:15:18.290570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:14.126 [2024-07-22 16:15:18.293769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:14.126 [2024-07-22 16:15:18.293832] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:14.126 [2024-07-22 16:15:18.293977] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:38:14.126 [2024-07-22 16:15:18.294075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:14.126 pt1 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:14.126 16:15:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.384 16:15:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:14.384 "name": "raid_bdev1", 00:38:14.384 "uuid": "1f108a1b-101d-43f5-9a49-1c62fd131798", 00:38:14.384 "strip_size_kb": 64, 00:38:14.384 "state": "configuring", 00:38:14.384 "raid_level": "raid0", 00:38:14.384 "superblock": true, 00:38:14.384 "num_base_bdevs": 4, 00:38:14.384 "num_base_bdevs_discovered": 1, 00:38:14.384 "num_base_bdevs_operational": 4, 00:38:14.384 "base_bdevs_list": [ 00:38:14.384 { 00:38:14.384 "name": "pt1", 00:38:14.384 "uuid": "59ac7729-7929-5e15-85a8-8a5abdb08558", 00:38:14.384 "is_configured": true, 00:38:14.384 "data_offset": 2048, 00:38:14.384 "data_size": 63488 00:38:14.384 }, 00:38:14.384 { 00:38:14.384 "name": null, 00:38:14.384 "uuid": "478a4d11-3946-5753-8d8b-432017c4b591", 00:38:14.384 "is_configured": false, 00:38:14.384 "data_offset": 2048, 00:38:14.384 "data_size": 63488 00:38:14.384 }, 00:38:14.384 { 00:38:14.384 "name": null, 00:38:14.384 "uuid": "6ed11ee9-8661-529d-9e7d-360d4a9ae5a6", 00:38:14.384 "is_configured": false, 00:38:14.384 "data_offset": 2048, 00:38:14.384 "data_size": 63488 00:38:14.384 }, 00:38:14.384 { 00:38:14.384 "name": null, 00:38:14.385 "uuid": "62376abd-e0bb-55a4-b413-a0787a598313", 00:38:14.385 "is_configured": false, 00:38:14.385 "data_offset": 2048, 00:38:14.385 "data_size": 63488 00:38:14.385 } 00:38:14.385 ] 00:38:14.385 }' 00:38:14.385 16:15:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:14.385 16:15:18 -- common/autotest_common.sh@10 -- # set +x 00:38:14.642 16:15:18 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:38:14.642 16:15:18 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:14.900 [2024-07-22 16:15:19.130593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:14.900 [2024-07-22 16:15:19.130714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:14.900 [2024-07-22 16:15:19.130757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:38:14.900 [2024-07-22 16:15:19.130772] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:14.900 [2024-07-22 16:15:19.131363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:14.900 [2024-07-22 16:15:19.131396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:14.900 [2024-07-22 16:15:19.131505] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:38:14.900 [2024-07-22 16:15:19.131580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:14.900 pt2 00:38:14.900 16:15:19 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:15.157 [2024-07-22 16:15:19.410778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:38:15.416 16:15:19 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:15.416 16:15:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:15.675 16:15:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:15.675 "name": "raid_bdev1", 00:38:15.675 "uuid": "1f108a1b-101d-43f5-9a49-1c62fd131798", 00:38:15.675 "strip_size_kb": 64, 00:38:15.675 "state": "configuring", 00:38:15.675 "raid_level": "raid0", 00:38:15.675 "superblock": true, 00:38:15.675 "num_base_bdevs": 4, 00:38:15.675 "num_base_bdevs_discovered": 1, 00:38:15.675 "num_base_bdevs_operational": 4, 00:38:15.675 "base_bdevs_list": [ 00:38:15.675 { 00:38:15.675 "name": "pt1", 00:38:15.675 "uuid": "59ac7729-7929-5e15-85a8-8a5abdb08558", 00:38:15.675 "is_configured": true, 00:38:15.675 "data_offset": 2048, 00:38:15.675 "data_size": 63488 00:38:15.675 }, 00:38:15.675 { 00:38:15.675 "name": null, 00:38:15.675 "uuid": "478a4d11-3946-5753-8d8b-432017c4b591", 00:38:15.675 "is_configured": false, 00:38:15.675 "data_offset": 2048, 00:38:15.675 "data_size": 63488 00:38:15.675 }, 00:38:15.675 { 00:38:15.675 "name": null, 00:38:15.675 "uuid": "6ed11ee9-8661-529d-9e7d-360d4a9ae5a6", 00:38:15.675 "is_configured": false, 00:38:15.675 "data_offset": 2048, 00:38:15.675 "data_size": 63488 00:38:15.675 }, 00:38:15.675 { 00:38:15.675 "name": null, 00:38:15.675 "uuid": "62376abd-e0bb-55a4-b413-a0787a598313", 00:38:15.675 "is_configured": false, 00:38:15.675 "data_offset": 2048, 00:38:15.675 "data_size": 63488 00:38:15.675 } 00:38:15.675 ] 00:38:15.675 }' 00:38:15.675 16:15:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:15.675 16:15:19 -- common/autotest_common.sh@10 -- # set +x 00:38:15.933 16:15:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:38:15.933 16:15:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:38:15.933 16:15:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:16.192 [2024-07-22 16:15:20.294963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:16.192 [2024-07-22 16:15:20.295113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:16.192 [2024-07-22 16:15:20.295150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:38:16.192 [2024-07-22 16:15:20.295170] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:16.192 [2024-07-22 16:15:20.295803] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:16.192 [2024-07-22 16:15:20.295843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:16.192 [2024-07-22 16:15:20.295958] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:38:16.192 [2024-07-22 16:15:20.296016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:16.192 pt2 00:38:16.192 16:15:20 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:38:16.192 16:15:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:38:16.192 16:15:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:16.449 [2024-07-22 16:15:20.579065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:16.449 [2024-07-22 16:15:20.579174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:16.449 [2024-07-22 16:15:20.579212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:38:16.449 [2024-07-22 16:15:20.579231] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:16.449 [2024-07-22 16:15:20.579854] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:16.449 [2024-07-22 16:15:20.579909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:16.449 [2024-07-22 16:15:20.580048] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:38:16.449 [2024-07-22 16:15:20.580095] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:16.449 pt3 00:38:16.449 16:15:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:38:16.449 16:15:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:38:16.449 16:15:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:16.753 [2024-07-22 16:15:20.843127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:16.753 [2024-07-22 16:15:20.843248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:16.753 [2024-07-22 16:15:20.843289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:38:16.753 [2024-07-22 16:15:20.843314] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:16.753 [2024-07-22 16:15:20.843953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:16.753 [2024-07-22 16:15:20.844010] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:16.753 [2024-07-22 16:15:20.844160] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:38:16.753 [2024-07-22 16:15:20.844210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:16.753 [2024-07-22 16:15:20.844394] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:38:16.753 [2024-07-22 16:15:20.844427] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:16.753 [2024-07-22 16:15:20.844540] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:38:16.753 [2024-07-22 16:15:20.844954] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:38:16.753 [2024-07-22 16:15:20.844979] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:38:16.753 [2024-07-22 16:15:20.845164] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:16.753 pt4 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:16.753 16:15:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:17.011 16:15:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:17.011 "name": "raid_bdev1", 00:38:17.011 "uuid": "1f108a1b-101d-43f5-9a49-1c62fd131798", 00:38:17.011 "strip_size_kb": 64, 00:38:17.011 "state": "online", 00:38:17.011 "raid_level": "raid0", 00:38:17.011 "superblock": true, 00:38:17.011 "num_base_bdevs": 4, 00:38:17.011 "num_base_bdevs_discovered": 4, 00:38:17.011 "num_base_bdevs_operational": 4, 00:38:17.011 "base_bdevs_list": [ 00:38:17.011 { 00:38:17.011 "name": "pt1", 00:38:17.011 "uuid": "59ac7729-7929-5e15-85a8-8a5abdb08558", 00:38:17.011 "is_configured": true, 00:38:17.011 "data_offset": 2048, 00:38:17.011 "data_size": 63488 00:38:17.011 }, 00:38:17.011 { 00:38:17.011 "name": "pt2", 00:38:17.011 "uuid": "478a4d11-3946-5753-8d8b-432017c4b591", 00:38:17.011 "is_configured": true, 00:38:17.011 "data_offset": 2048, 00:38:17.011 "data_size": 63488 00:38:17.011 }, 00:38:17.011 { 00:38:17.011 "name": "pt3", 00:38:17.011 "uuid": "6ed11ee9-8661-529d-9e7d-360d4a9ae5a6", 00:38:17.011 "is_configured": true, 00:38:17.011 "data_offset": 2048, 00:38:17.011 "data_size": 63488 00:38:17.011 }, 00:38:17.011 { 00:38:17.011 "name": "pt4", 00:38:17.011 "uuid": "62376abd-e0bb-55a4-b413-a0787a598313", 00:38:17.011 "is_configured": true, 00:38:17.012 "data_offset": 2048, 00:38:17.012 "data_size": 63488 00:38:17.012 } 00:38:17.012 ] 00:38:17.012 }' 00:38:17.012 16:15:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:17.012 16:15:21 -- common/autotest_common.sh@10 -- # set +x 00:38:17.269 16:15:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:17.269 16:15:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:38:17.527 [2024-07-22 16:15:21.703635] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:17.528 16:15:21 -- bdev/bdev_raid.sh@430 -- # '[' 1f108a1b-101d-43f5-9a49-1c62fd131798 '!=' 1f108a1b-101d-43f5-9a49-1c62fd131798 ']' 00:38:17.528 16:15:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:38:17.528 16:15:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:38:17.528 16:15:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:38:17.528 16:15:21 -- bdev/bdev_raid.sh@511 -- # killprocess 76539 00:38:17.528 16:15:21 -- common/autotest_common.sh@926 -- # '[' -z 76539 ']' 00:38:17.528 16:15:21 -- common/autotest_common.sh@930 -- # kill -0 76539 00:38:17.528 16:15:21 -- common/autotest_common.sh@931 -- # uname 00:38:17.528 16:15:21 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:17.528 16:15:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76539 00:38:17.528 killing process with pid 76539 00:38:17.528 16:15:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:38:17.528 16:15:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:38:17.528 16:15:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76539' 00:38:17.528 16:15:21 -- common/autotest_common.sh@945 -- # kill 76539 00:38:17.528 16:15:21 -- common/autotest_common.sh@950 -- # wait 76539 00:38:17.528 [2024-07-22 16:15:21.756963] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:17.528 [2024-07-22 16:15:21.757505] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:17.528 [2024-07-22 16:15:21.757643] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:17.528 [2024-07-22 16:15:21.757664] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:38:18.094 [2024-07-22 16:15:22.190900] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:19.467 16:15:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:38:19.467 00:38:19.467 real 0m12.577s 00:38:19.467 user 0m20.401s 00:38:19.467 sys 0m1.940s 00:38:19.467 16:15:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:19.467 16:15:23 -- common/autotest_common.sh@10 -- # set +x 00:38:19.467 ************************************ 00:38:19.467 END TEST raid_superblock_test 00:38:19.467 ************************************ 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:38:19.752 16:15:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:38:19.752 16:15:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:19.752 16:15:23 -- common/autotest_common.sh@10 -- # set +x 00:38:19.752 ************************************ 00:38:19.752 START TEST raid_state_function_test 00:38:19.752 ************************************ 00:38:19.752 16:15:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:19.752 
16:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=76858 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:38:19.752 Process raid pid: 76858 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76858' 00:38:19.752 16:15:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76858 /var/tmp/spdk-raid.sock 00:38:19.752 16:15:23 -- common/autotest_common.sh@819 -- # '[' -z 76858 ']' 00:38:19.752 16:15:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:19.752 16:15:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:19.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:19.752 16:15:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:19.752 16:15:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:19.752 16:15:23 -- common/autotest_common.sh@10 -- # set +x 00:38:19.752 [2024-07-22 16:15:23.854537] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
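Note: the bdev_svc target started above listens on the JSON-RPC socket /var/tmp/spdk-raid.sock, and every assertion raid_state_function_test makes below is an rpc.py call against that socket. A minimal stand-alone sketch of the create-and-verify flow traced in this run, using only subcommands and flags that appear verbatim in the trace (the RPC shell variable and the trailing .state jq filter are illustrative additions, not part of the test script):
# Sketch of the flow this test traces; not the test script itself.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Creating the concat raid before any base bdev exists leaves it waiting in the "configuring" state.
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Query the raid the same way verify_raid_bdev_state does, then pick out its state.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'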
00:38:19.752 [2024-07-22 16:15:23.854688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:20.045 [2024-07-22 16:15:24.026836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.303 [2024-07-22 16:15:24.355658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.561 [2024-07-22 16:15:24.601626] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:20.561 16:15:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:20.561 16:15:24 -- common/autotest_common.sh@852 -- # return 0 00:38:20.561 16:15:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:38:20.819 [2024-07-22 16:15:25.023478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:20.819 [2024-07-22 16:15:25.023621] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:20.819 [2024-07-22 16:15:25.023640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:20.819 [2024-07-22 16:15:25.023658] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:20.819 [2024-07-22 16:15:25.023669] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:20.819 [2024-07-22 16:15:25.023685] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:20.819 [2024-07-22 16:15:25.023694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:20.819 [2024-07-22 16:15:25.023709] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:20.819 16:15:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:21.077 16:15:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:21.077 "name": "Existed_Raid", 00:38:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:21.077 "strip_size_kb": 64, 00:38:21.077 "state": "configuring", 00:38:21.077 "raid_level": "concat", 00:38:21.077 "superblock": false, 00:38:21.077 "num_base_bdevs": 4, 00:38:21.077 "num_base_bdevs_discovered": 0, 00:38:21.077 "num_base_bdevs_operational": 4, 00:38:21.077 "base_bdevs_list": [ 00:38:21.077 { 00:38:21.077 
"name": "BaseBdev1", 00:38:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:21.077 "is_configured": false, 00:38:21.077 "data_offset": 0, 00:38:21.077 "data_size": 0 00:38:21.077 }, 00:38:21.077 { 00:38:21.077 "name": "BaseBdev2", 00:38:21.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:21.077 "is_configured": false, 00:38:21.077 "data_offset": 0, 00:38:21.077 "data_size": 0 00:38:21.078 }, 00:38:21.078 { 00:38:21.078 "name": "BaseBdev3", 00:38:21.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:21.078 "is_configured": false, 00:38:21.078 "data_offset": 0, 00:38:21.078 "data_size": 0 00:38:21.078 }, 00:38:21.078 { 00:38:21.078 "name": "BaseBdev4", 00:38:21.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:21.078 "is_configured": false, 00:38:21.078 "data_offset": 0, 00:38:21.078 "data_size": 0 00:38:21.078 } 00:38:21.078 ] 00:38:21.078 }' 00:38:21.078 16:15:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:21.078 16:15:25 -- common/autotest_common.sh@10 -- # set +x 00:38:21.644 16:15:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:21.902 [2024-07-22 16:15:25.963628] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:21.902 [2024-07-22 16:15:25.963699] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:38:21.902 16:15:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:38:22.160 [2024-07-22 16:15:26.239726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:22.161 [2024-07-22 16:15:26.239813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:22.161 [2024-07-22 16:15:26.239830] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:22.161 [2024-07-22 16:15:26.239847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:22.161 [2024-07-22 16:15:26.239857] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:22.161 [2024-07-22 16:15:26.239887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:22.161 [2024-07-22 16:15:26.239896] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:22.161 [2024-07-22 16:15:26.239910] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:22.161 16:15:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:38:22.419 [2024-07-22 16:15:26.546834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:22.419 BaseBdev1 00:38:22.419 16:15:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:38:22.419 16:15:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:38:22.419 16:15:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:22.419 16:15:26 -- common/autotest_common.sh@889 -- # local i 00:38:22.419 16:15:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:22.419 16:15:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:22.419 16:15:26 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:22.677 16:15:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:22.936 [ 00:38:22.936 { 00:38:22.936 "name": "BaseBdev1", 00:38:22.936 "aliases": [ 00:38:22.936 "f4dd5d70-1c70-46c7-9c33-8f6cac88a0a3" 00:38:22.936 ], 00:38:22.936 "product_name": "Malloc disk", 00:38:22.936 "block_size": 512, 00:38:22.936 "num_blocks": 65536, 00:38:22.936 "uuid": "f4dd5d70-1c70-46c7-9c33-8f6cac88a0a3", 00:38:22.936 "assigned_rate_limits": { 00:38:22.936 "rw_ios_per_sec": 0, 00:38:22.936 "rw_mbytes_per_sec": 0, 00:38:22.936 "r_mbytes_per_sec": 0, 00:38:22.936 "w_mbytes_per_sec": 0 00:38:22.936 }, 00:38:22.936 "claimed": true, 00:38:22.936 "claim_type": "exclusive_write", 00:38:22.936 "zoned": false, 00:38:22.936 "supported_io_types": { 00:38:22.936 "read": true, 00:38:22.936 "write": true, 00:38:22.936 "unmap": true, 00:38:22.936 "write_zeroes": true, 00:38:22.936 "flush": true, 00:38:22.936 "reset": true, 00:38:22.936 "compare": false, 00:38:22.936 "compare_and_write": false, 00:38:22.936 "abort": true, 00:38:22.936 "nvme_admin": false, 00:38:22.936 "nvme_io": false 00:38:22.936 }, 00:38:22.936 "memory_domains": [ 00:38:22.936 { 00:38:22.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:22.936 "dma_device_type": 2 00:38:22.936 } 00:38:22.936 ], 00:38:22.936 "driver_specific": {} 00:38:22.936 } 00:38:22.936 ] 00:38:22.936 16:15:27 -- common/autotest_common.sh@895 -- # return 0 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:22.936 16:15:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:23.195 16:15:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:23.195 "name": "Existed_Raid", 00:38:23.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:23.195 "strip_size_kb": 64, 00:38:23.195 "state": "configuring", 00:38:23.195 "raid_level": "concat", 00:38:23.195 "superblock": false, 00:38:23.195 "num_base_bdevs": 4, 00:38:23.195 "num_base_bdevs_discovered": 1, 00:38:23.195 "num_base_bdevs_operational": 4, 00:38:23.195 "base_bdevs_list": [ 00:38:23.195 { 00:38:23.195 "name": "BaseBdev1", 00:38:23.195 "uuid": "f4dd5d70-1c70-46c7-9c33-8f6cac88a0a3", 00:38:23.195 "is_configured": true, 00:38:23.195 "data_offset": 0, 00:38:23.195 "data_size": 65536 00:38:23.195 }, 00:38:23.195 { 00:38:23.195 "name": "BaseBdev2", 00:38:23.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:23.195 "is_configured": false, 00:38:23.195 "data_offset": 0, 00:38:23.195 "data_size": 0 00:38:23.195 }, 
00:38:23.195 { 00:38:23.195 "name": "BaseBdev3", 00:38:23.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:23.195 "is_configured": false, 00:38:23.195 "data_offset": 0, 00:38:23.195 "data_size": 0 00:38:23.195 }, 00:38:23.195 { 00:38:23.195 "name": "BaseBdev4", 00:38:23.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:23.195 "is_configured": false, 00:38:23.195 "data_offset": 0, 00:38:23.195 "data_size": 0 00:38:23.195 } 00:38:23.195 ] 00:38:23.195 }' 00:38:23.195 16:15:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:23.195 16:15:27 -- common/autotest_common.sh@10 -- # set +x 00:38:23.762 16:15:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:23.762 [2024-07-22 16:15:27.987404] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:23.762 [2024-07-22 16:15:27.987781] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:38:23.762 16:15:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:38:23.762 16:15:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:38:24.029 [2024-07-22 16:15:28.259638] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:24.029 [2024-07-22 16:15:28.262474] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:24.029 [2024-07-22 16:15:28.262554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:24.029 [2024-07-22 16:15:28.262575] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:24.029 [2024-07-22 16:15:28.262592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:24.029 [2024-07-22 16:15:28.262611] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:24.029 [2024-07-22 16:15:28.262636] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:24.029 16:15:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:24.303 16:15:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:24.303 "name": "Existed_Raid", 00:38:24.303 
"uuid": "00000000-0000-0000-0000-000000000000", 00:38:24.303 "strip_size_kb": 64, 00:38:24.303 "state": "configuring", 00:38:24.303 "raid_level": "concat", 00:38:24.303 "superblock": false, 00:38:24.303 "num_base_bdevs": 4, 00:38:24.303 "num_base_bdevs_discovered": 1, 00:38:24.303 "num_base_bdevs_operational": 4, 00:38:24.303 "base_bdevs_list": [ 00:38:24.303 { 00:38:24.303 "name": "BaseBdev1", 00:38:24.303 "uuid": "f4dd5d70-1c70-46c7-9c33-8f6cac88a0a3", 00:38:24.303 "is_configured": true, 00:38:24.303 "data_offset": 0, 00:38:24.303 "data_size": 65536 00:38:24.303 }, 00:38:24.303 { 00:38:24.303 "name": "BaseBdev2", 00:38:24.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:24.303 "is_configured": false, 00:38:24.303 "data_offset": 0, 00:38:24.303 "data_size": 0 00:38:24.303 }, 00:38:24.303 { 00:38:24.303 "name": "BaseBdev3", 00:38:24.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:24.303 "is_configured": false, 00:38:24.303 "data_offset": 0, 00:38:24.303 "data_size": 0 00:38:24.303 }, 00:38:24.303 { 00:38:24.303 "name": "BaseBdev4", 00:38:24.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:24.303 "is_configured": false, 00:38:24.303 "data_offset": 0, 00:38:24.303 "data_size": 0 00:38:24.303 } 00:38:24.303 ] 00:38:24.303 }' 00:38:24.303 16:15:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:24.303 16:15:28 -- common/autotest_common.sh@10 -- # set +x 00:38:24.869 16:15:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:38:25.127 [2024-07-22 16:15:29.168961] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:25.127 BaseBdev2 00:38:25.127 16:15:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:38:25.128 16:15:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:38:25.128 16:15:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:25.128 16:15:29 -- common/autotest_common.sh@889 -- # local i 00:38:25.128 16:15:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:25.128 16:15:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:25.128 16:15:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:25.386 16:15:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:25.643 [ 00:38:25.643 { 00:38:25.643 "name": "BaseBdev2", 00:38:25.643 "aliases": [ 00:38:25.643 "b6e1c6b1-8294-48d7-8621-f759dc81f6e6" 00:38:25.643 ], 00:38:25.643 "product_name": "Malloc disk", 00:38:25.643 "block_size": 512, 00:38:25.643 "num_blocks": 65536, 00:38:25.643 "uuid": "b6e1c6b1-8294-48d7-8621-f759dc81f6e6", 00:38:25.643 "assigned_rate_limits": { 00:38:25.643 "rw_ios_per_sec": 0, 00:38:25.643 "rw_mbytes_per_sec": 0, 00:38:25.643 "r_mbytes_per_sec": 0, 00:38:25.643 "w_mbytes_per_sec": 0 00:38:25.643 }, 00:38:25.643 "claimed": true, 00:38:25.643 "claim_type": "exclusive_write", 00:38:25.643 "zoned": false, 00:38:25.643 "supported_io_types": { 00:38:25.643 "read": true, 00:38:25.643 "write": true, 00:38:25.643 "unmap": true, 00:38:25.643 "write_zeroes": true, 00:38:25.643 "flush": true, 00:38:25.643 "reset": true, 00:38:25.643 "compare": false, 00:38:25.643 "compare_and_write": false, 00:38:25.643 "abort": true, 00:38:25.643 "nvme_admin": false, 00:38:25.643 "nvme_io": false 00:38:25.643 }, 00:38:25.643 "memory_domains": [ 
00:38:25.643 { 00:38:25.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:25.644 "dma_device_type": 2 00:38:25.644 } 00:38:25.644 ], 00:38:25.644 "driver_specific": {} 00:38:25.644 } 00:38:25.644 ] 00:38:25.644 16:15:29 -- common/autotest_common.sh@895 -- # return 0 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:25.644 16:15:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:25.901 16:15:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:25.901 "name": "Existed_Raid", 00:38:25.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:25.901 "strip_size_kb": 64, 00:38:25.901 "state": "configuring", 00:38:25.901 "raid_level": "concat", 00:38:25.901 "superblock": false, 00:38:25.901 "num_base_bdevs": 4, 00:38:25.901 "num_base_bdevs_discovered": 2, 00:38:25.901 "num_base_bdevs_operational": 4, 00:38:25.901 "base_bdevs_list": [ 00:38:25.901 { 00:38:25.901 "name": "BaseBdev1", 00:38:25.901 "uuid": "f4dd5d70-1c70-46c7-9c33-8f6cac88a0a3", 00:38:25.901 "is_configured": true, 00:38:25.901 "data_offset": 0, 00:38:25.901 "data_size": 65536 00:38:25.901 }, 00:38:25.901 { 00:38:25.901 "name": "BaseBdev2", 00:38:25.901 "uuid": "b6e1c6b1-8294-48d7-8621-f759dc81f6e6", 00:38:25.901 "is_configured": true, 00:38:25.901 "data_offset": 0, 00:38:25.901 "data_size": 65536 00:38:25.901 }, 00:38:25.901 { 00:38:25.902 "name": "BaseBdev3", 00:38:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:25.902 "is_configured": false, 00:38:25.902 "data_offset": 0, 00:38:25.902 "data_size": 0 00:38:25.902 }, 00:38:25.902 { 00:38:25.902 "name": "BaseBdev4", 00:38:25.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:25.902 "is_configured": false, 00:38:25.902 "data_offset": 0, 00:38:25.902 "data_size": 0 00:38:25.902 } 00:38:25.902 ] 00:38:25.902 }' 00:38:25.902 16:15:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:25.902 16:15:30 -- common/autotest_common.sh@10 -- # set +x 00:38:26.160 16:15:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:38:26.418 [2024-07-22 16:15:30.616759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:26.419 BaseBdev3 00:38:26.419 16:15:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:38:26.419 16:15:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:38:26.419 16:15:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:26.419 
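Note: each bdev_malloc_create in this sequence triggers an examine pass in which the concat raid claims the new disk, so num_base_bdevs_discovered climbs from 1 to 2 to 3 while Existed_Raid stays in the "configuring" state; only when the fourth base bdev is claimed below does it transition to "online". A hedged sketch of that final step, reusing the RPCs already shown in this trace (the polling loop is illustrative; the test itself uses waitforbdev and verify_raid_bdev_state):
# Sketch: add the last base bdev (32 MB, 512-byte blocks, as in the trace) and wait for "online".
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_malloc_create 32 512 -b BaseBdev4
until [ "$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state')" = online ]; do
    sleep 0.1
done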
16:15:30 -- common/autotest_common.sh@889 -- # local i 00:38:26.419 16:15:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:26.419 16:15:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:26.419 16:15:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:26.677 16:15:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:26.936 [ 00:38:26.936 { 00:38:26.936 "name": "BaseBdev3", 00:38:26.936 "aliases": [ 00:38:26.936 "237895d9-c19a-492e-94ea-49a034535b34" 00:38:26.936 ], 00:38:26.936 "product_name": "Malloc disk", 00:38:26.936 "block_size": 512, 00:38:26.936 "num_blocks": 65536, 00:38:26.936 "uuid": "237895d9-c19a-492e-94ea-49a034535b34", 00:38:26.936 "assigned_rate_limits": { 00:38:26.936 "rw_ios_per_sec": 0, 00:38:26.936 "rw_mbytes_per_sec": 0, 00:38:26.936 "r_mbytes_per_sec": 0, 00:38:26.936 "w_mbytes_per_sec": 0 00:38:26.936 }, 00:38:26.936 "claimed": true, 00:38:26.936 "claim_type": "exclusive_write", 00:38:26.936 "zoned": false, 00:38:26.936 "supported_io_types": { 00:38:26.936 "read": true, 00:38:26.936 "write": true, 00:38:26.936 "unmap": true, 00:38:26.936 "write_zeroes": true, 00:38:26.936 "flush": true, 00:38:26.936 "reset": true, 00:38:26.936 "compare": false, 00:38:26.936 "compare_and_write": false, 00:38:26.936 "abort": true, 00:38:26.936 "nvme_admin": false, 00:38:26.936 "nvme_io": false 00:38:26.936 }, 00:38:26.936 "memory_domains": [ 00:38:26.936 { 00:38:26.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.936 "dma_device_type": 2 00:38:26.936 } 00:38:26.936 ], 00:38:26.936 "driver_specific": {} 00:38:26.936 } 00:38:26.936 ] 00:38:26.936 16:15:31 -- common/autotest_common.sh@895 -- # return 0 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:26.936 16:15:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.194 16:15:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:27.194 "name": "Existed_Raid", 00:38:27.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:27.194 "strip_size_kb": 64, 00:38:27.194 "state": "configuring", 00:38:27.194 "raid_level": "concat", 00:38:27.194 "superblock": false, 00:38:27.194 "num_base_bdevs": 4, 00:38:27.194 "num_base_bdevs_discovered": 3, 00:38:27.194 "num_base_bdevs_operational": 4, 00:38:27.194 "base_bdevs_list": [ 00:38:27.194 { 00:38:27.194 "name": 
"BaseBdev1", 00:38:27.194 "uuid": "f4dd5d70-1c70-46c7-9c33-8f6cac88a0a3", 00:38:27.194 "is_configured": true, 00:38:27.194 "data_offset": 0, 00:38:27.194 "data_size": 65536 00:38:27.194 }, 00:38:27.194 { 00:38:27.194 "name": "BaseBdev2", 00:38:27.194 "uuid": "b6e1c6b1-8294-48d7-8621-f759dc81f6e6", 00:38:27.194 "is_configured": true, 00:38:27.194 "data_offset": 0, 00:38:27.194 "data_size": 65536 00:38:27.194 }, 00:38:27.194 { 00:38:27.194 "name": "BaseBdev3", 00:38:27.194 "uuid": "237895d9-c19a-492e-94ea-49a034535b34", 00:38:27.194 "is_configured": true, 00:38:27.194 "data_offset": 0, 00:38:27.194 "data_size": 65536 00:38:27.194 }, 00:38:27.194 { 00:38:27.194 "name": "BaseBdev4", 00:38:27.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:27.194 "is_configured": false, 00:38:27.194 "data_offset": 0, 00:38:27.194 "data_size": 0 00:38:27.194 } 00:38:27.194 ] 00:38:27.194 }' 00:38:27.194 16:15:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:27.194 16:15:31 -- common/autotest_common.sh@10 -- # set +x 00:38:27.775 16:15:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:38:28.035 [2024-07-22 16:15:32.056682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:28.035 BaseBdev4 00:38:28.035 [2024-07-22 16:15:32.057061] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:38:28.035 [2024-07-22 16:15:32.057097] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:38:28.035 [2024-07-22 16:15:32.057280] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:38:28.035 [2024-07-22 16:15:32.057724] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:38:28.035 [2024-07-22 16:15:32.057763] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:38:28.035 [2024-07-22 16:15:32.058074] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:28.035 16:15:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:38:28.035 16:15:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:38:28.035 16:15:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:28.035 16:15:32 -- common/autotest_common.sh@889 -- # local i 00:38:28.035 16:15:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:28.035 16:15:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:28.035 16:15:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:28.291 16:15:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:28.549 [ 00:38:28.549 { 00:38:28.549 "name": "BaseBdev4", 00:38:28.549 "aliases": [ 00:38:28.549 "34b896e2-cef5-44f0-8b5d-00697d943376" 00:38:28.549 ], 00:38:28.549 "product_name": "Malloc disk", 00:38:28.549 "block_size": 512, 00:38:28.549 "num_blocks": 65536, 00:38:28.549 "uuid": "34b896e2-cef5-44f0-8b5d-00697d943376", 00:38:28.549 "assigned_rate_limits": { 00:38:28.549 "rw_ios_per_sec": 0, 00:38:28.549 "rw_mbytes_per_sec": 0, 00:38:28.549 "r_mbytes_per_sec": 0, 00:38:28.549 "w_mbytes_per_sec": 0 00:38:28.549 }, 00:38:28.549 "claimed": true, 00:38:28.549 "claim_type": "exclusive_write", 00:38:28.549 "zoned": false, 00:38:28.549 
"supported_io_types": { 00:38:28.549 "read": true, 00:38:28.549 "write": true, 00:38:28.549 "unmap": true, 00:38:28.549 "write_zeroes": true, 00:38:28.549 "flush": true, 00:38:28.549 "reset": true, 00:38:28.549 "compare": false, 00:38:28.549 "compare_and_write": false, 00:38:28.549 "abort": true, 00:38:28.549 "nvme_admin": false, 00:38:28.549 "nvme_io": false 00:38:28.549 }, 00:38:28.549 "memory_domains": [ 00:38:28.549 { 00:38:28.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:28.549 "dma_device_type": 2 00:38:28.549 } 00:38:28.549 ], 00:38:28.549 "driver_specific": {} 00:38:28.549 } 00:38:28.549 ] 00:38:28.549 16:15:32 -- common/autotest_common.sh@895 -- # return 0 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:28.549 16:15:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:28.807 16:15:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:28.807 "name": "Existed_Raid", 00:38:28.807 "uuid": "96c6f8a3-5cfb-4e83-b657-e6c4d11d5ea6", 00:38:28.807 "strip_size_kb": 64, 00:38:28.807 "state": "online", 00:38:28.807 "raid_level": "concat", 00:38:28.807 "superblock": false, 00:38:28.807 "num_base_bdevs": 4, 00:38:28.807 "num_base_bdevs_discovered": 4, 00:38:28.807 "num_base_bdevs_operational": 4, 00:38:28.807 "base_bdevs_list": [ 00:38:28.807 { 00:38:28.807 "name": "BaseBdev1", 00:38:28.807 "uuid": "f4dd5d70-1c70-46c7-9c33-8f6cac88a0a3", 00:38:28.807 "is_configured": true, 00:38:28.807 "data_offset": 0, 00:38:28.807 "data_size": 65536 00:38:28.807 }, 00:38:28.807 { 00:38:28.807 "name": "BaseBdev2", 00:38:28.807 "uuid": "b6e1c6b1-8294-48d7-8621-f759dc81f6e6", 00:38:28.807 "is_configured": true, 00:38:28.807 "data_offset": 0, 00:38:28.807 "data_size": 65536 00:38:28.807 }, 00:38:28.807 { 00:38:28.807 "name": "BaseBdev3", 00:38:28.807 "uuid": "237895d9-c19a-492e-94ea-49a034535b34", 00:38:28.807 "is_configured": true, 00:38:28.807 "data_offset": 0, 00:38:28.807 "data_size": 65536 00:38:28.807 }, 00:38:28.807 { 00:38:28.807 "name": "BaseBdev4", 00:38:28.807 "uuid": "34b896e2-cef5-44f0-8b5d-00697d943376", 00:38:28.807 "is_configured": true, 00:38:28.807 "data_offset": 0, 00:38:28.807 "data_size": 65536 00:38:28.807 } 00:38:28.807 ] 00:38:28.807 }' 00:38:28.807 16:15:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:28.807 16:15:32 -- common/autotest_common.sh@10 -- # set +x 00:38:29.065 16:15:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:38:29.323 [2024-07-22 16:15:33.385254] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:29.323 [2024-07-22 16:15:33.385451] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:29.323 [2024-07-22 16:15:33.385659] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:29.323 16:15:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:29.582 16:15:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:29.582 "name": "Existed_Raid", 00:38:29.582 "uuid": "96c6f8a3-5cfb-4e83-b657-e6c4d11d5ea6", 00:38:29.582 "strip_size_kb": 64, 00:38:29.582 "state": "offline", 00:38:29.582 "raid_level": "concat", 00:38:29.582 "superblock": false, 00:38:29.582 "num_base_bdevs": 4, 00:38:29.582 "num_base_bdevs_discovered": 3, 00:38:29.582 "num_base_bdevs_operational": 3, 00:38:29.582 "base_bdevs_list": [ 00:38:29.582 { 00:38:29.582 "name": null, 00:38:29.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:29.582 "is_configured": false, 00:38:29.582 "data_offset": 0, 00:38:29.582 "data_size": 65536 00:38:29.582 }, 00:38:29.582 { 00:38:29.582 "name": "BaseBdev2", 00:38:29.582 "uuid": "b6e1c6b1-8294-48d7-8621-f759dc81f6e6", 00:38:29.582 "is_configured": true, 00:38:29.582 "data_offset": 0, 00:38:29.582 "data_size": 65536 00:38:29.582 }, 00:38:29.582 { 00:38:29.582 "name": "BaseBdev3", 00:38:29.582 "uuid": "237895d9-c19a-492e-94ea-49a034535b34", 00:38:29.582 "is_configured": true, 00:38:29.582 "data_offset": 0, 00:38:29.582 "data_size": 65536 00:38:29.582 }, 00:38:29.582 { 00:38:29.582 "name": "BaseBdev4", 00:38:29.582 "uuid": "34b896e2-cef5-44f0-8b5d-00697d943376", 00:38:29.582 "is_configured": true, 00:38:29.582 "data_offset": 0, 00:38:29.582 "data_size": 65536 00:38:29.582 } 00:38:29.582 ] 00:38:29.582 }' 00:38:29.582 16:15:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:29.582 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:38:29.840 16:15:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:38:29.841 16:15:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:29.841 16:15:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:38:29.841 16:15:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:30.407 16:15:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:30.407 16:15:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:30.407 16:15:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:38:30.407 [2024-07-22 16:15:34.651208] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:30.666 16:15:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:30.666 16:15:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:30.666 16:15:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.666 16:15:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:30.924 16:15:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:30.924 16:15:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:30.924 16:15:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:38:31.183 [2024-07-22 16:15:35.265483] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:31.183 16:15:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:31.183 16:15:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:31.183 16:15:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:31.183 16:15:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:31.441 16:15:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:31.442 16:15:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:31.442 16:15:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:38:31.700 [2024-07-22 16:15:35.881391] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:38:31.700 [2024-07-22 16:15:35.881484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:38:31.957 16:15:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:31.957 16:15:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:31.957 16:15:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:31.957 16:15:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:38:32.215 16:15:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:38:32.215 16:15:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:38:32.215 16:15:36 -- bdev/bdev_raid.sh@287 -- # killprocess 76858 00:38:32.215 16:15:36 -- common/autotest_common.sh@926 -- # '[' -z 76858 ']' 00:38:32.215 16:15:36 -- common/autotest_common.sh@930 -- # kill -0 76858 00:38:32.215 16:15:36 -- common/autotest_common.sh@931 -- # uname 00:38:32.215 16:15:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:32.215 16:15:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76858 00:38:32.215 killing process with pid 76858 00:38:32.215 16:15:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:38:32.215 16:15:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:38:32.215 16:15:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76858' 00:38:32.215 16:15:36 -- common/autotest_common.sh@945 -- # 
kill 76858 00:38:32.215 [2024-07-22 16:15:36.299156] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:32.215 16:15:36 -- common/autotest_common.sh@950 -- # wait 76858 00:38:32.215 [2024-07-22 16:15:36.299303] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:38:33.589 00:38:33.589 real 0m13.890s 00:38:33.589 user 0m22.919s 00:38:33.589 sys 0m2.309s 00:38:33.589 16:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:33.589 ************************************ 00:38:33.589 END TEST raid_state_function_test 00:38:33.589 ************************************ 00:38:33.589 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:38:33.589 16:15:37 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:38:33.589 16:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:33.589 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:38:33.589 ************************************ 00:38:33.589 START TEST raid_state_function_test_sb 00:38:33.589 ************************************ 00:38:33.589 16:15:37 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:38:33.589 16:15:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=77263 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 77263' 00:38:33.590 Process raid pid: 77263 00:38:33.590 16:15:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 77263 /var/tmp/spdk-raid.sock 00:38:33.590 16:15:37 -- common/autotest_common.sh@819 -- # '[' -z 77263 ']' 00:38:33.590 16:15:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:33.590 16:15:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:33.590 16:15:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:33.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:33.590 16:15:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:33.590 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:38:33.590 [2024-07-22 16:15:37.817570] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:38:33.590 [2024-07-22 16:15:37.817779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.847 [2024-07-22 16:15:37.996928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.105 [2024-07-22 16:15:38.260022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.363 [2024-07-22 16:15:38.487027] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:34.629 16:15:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:34.629 16:15:38 -- common/autotest_common.sh@852 -- # return 0 00:38:34.629 16:15:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:38:34.886 [2024-07-22 16:15:39.058451] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:34.886 [2024-07-22 16:15:39.058527] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:34.886 [2024-07-22 16:15:39.058544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:34.886 [2024-07-22 16:15:39.058560] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:34.887 [2024-07-22 16:15:39.058574] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:34.887 [2024-07-22 16:15:39.058590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:34.887 [2024-07-22 16:15:39.058600] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:34.887 [2024-07-22 16:15:39.058615] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:34.887 16:15:39 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:34.887 16:15:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:35.144 16:15:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:35.144 "name": "Existed_Raid", 00:38:35.144 "uuid": "a2579d0a-ca47-4a70-be60-8076327cb74c", 00:38:35.144 "strip_size_kb": 64, 00:38:35.144 "state": "configuring", 00:38:35.144 "raid_level": "concat", 00:38:35.144 "superblock": true, 00:38:35.144 "num_base_bdevs": 4, 00:38:35.144 "num_base_bdevs_discovered": 0, 00:38:35.144 "num_base_bdevs_operational": 4, 00:38:35.144 "base_bdevs_list": [ 00:38:35.144 { 00:38:35.144 "name": "BaseBdev1", 00:38:35.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:35.144 "is_configured": false, 00:38:35.144 "data_offset": 0, 00:38:35.144 "data_size": 0 00:38:35.144 }, 00:38:35.144 { 00:38:35.144 "name": "BaseBdev2", 00:38:35.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:35.144 "is_configured": false, 00:38:35.144 "data_offset": 0, 00:38:35.144 "data_size": 0 00:38:35.144 }, 00:38:35.144 { 00:38:35.144 "name": "BaseBdev3", 00:38:35.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:35.144 "is_configured": false, 00:38:35.144 "data_offset": 0, 00:38:35.144 "data_size": 0 00:38:35.144 }, 00:38:35.144 { 00:38:35.144 "name": "BaseBdev4", 00:38:35.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:35.144 "is_configured": false, 00:38:35.144 "data_offset": 0, 00:38:35.144 "data_size": 0 00:38:35.144 } 00:38:35.144 ] 00:38:35.144 }' 00:38:35.144 16:15:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:35.144 16:15:39 -- common/autotest_common.sh@10 -- # set +x 00:38:35.710 16:15:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:35.710 [2024-07-22 16:15:39.918570] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:35.710 [2024-07-22 16:15:39.918805] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:38:35.710 16:15:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:38:35.969 [2024-07-22 16:15:40.170842] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:35.969 [2024-07-22 16:15:40.170924] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:35.969 [2024-07-22 16:15:40.170948] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:35.969 [2024-07-22 16:15:40.170967] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:35.969 [2024-07-22 16:15:40.170977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:35.969 [2024-07-22 16:15:40.171013] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:35.969 [2024-07-22 16:15:40.171026] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:35.969 [2024-07-22 16:15:40.171041] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:35.969 16:15:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:38:36.227 [2024-07-22 16:15:40.494398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:36.227 BaseBdev1 00:38:36.486 16:15:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:38:36.486 16:15:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:38:36.486 16:15:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:36.486 16:15:40 -- common/autotest_common.sh@889 -- # local i 00:38:36.486 16:15:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:36.486 16:15:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:36.486 16:15:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:36.744 16:15:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:36.744 [ 00:38:36.744 { 00:38:36.744 "name": "BaseBdev1", 00:38:36.744 "aliases": [ 00:38:36.744 "6cc74a84-2b37-4a27-8aa5-740e5cffa4c2" 00:38:36.744 ], 00:38:36.744 "product_name": "Malloc disk", 00:38:36.744 "block_size": 512, 00:38:36.744 "num_blocks": 65536, 00:38:36.744 "uuid": "6cc74a84-2b37-4a27-8aa5-740e5cffa4c2", 00:38:36.744 "assigned_rate_limits": { 00:38:36.744 "rw_ios_per_sec": 0, 00:38:36.744 "rw_mbytes_per_sec": 0, 00:38:36.744 "r_mbytes_per_sec": 0, 00:38:36.744 "w_mbytes_per_sec": 0 00:38:36.744 }, 00:38:36.744 "claimed": true, 00:38:36.744 "claim_type": "exclusive_write", 00:38:36.744 "zoned": false, 00:38:36.744 "supported_io_types": { 00:38:36.744 "read": true, 00:38:36.744 "write": true, 00:38:36.744 "unmap": true, 00:38:36.744 "write_zeroes": true, 00:38:36.744 "flush": true, 00:38:36.744 "reset": true, 00:38:36.744 "compare": false, 00:38:36.744 "compare_and_write": false, 00:38:36.744 "abort": true, 00:38:36.744 "nvme_admin": false, 00:38:36.744 "nvme_io": false 00:38:36.744 }, 00:38:36.744 "memory_domains": [ 00:38:36.744 { 00:38:36.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:36.744 "dma_device_type": 2 00:38:36.744 } 00:38:36.744 ], 00:38:36.744 "driver_specific": {} 00:38:36.744 } 00:38:36.744 ] 00:38:37.003 16:15:41 -- common/autotest_common.sh@895 -- # return 0 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:37.003 "name": "Existed_Raid", 00:38:37.003 "uuid": "139a3db8-89e5-4c97-baf5-9b062b87f9a4", 00:38:37.003 "strip_size_kb": 64, 00:38:37.003 "state": "configuring", 00:38:37.003 "raid_level": "concat", 00:38:37.003 "superblock": true, 00:38:37.003 "num_base_bdevs": 4, 00:38:37.003 "num_base_bdevs_discovered": 1, 00:38:37.003 "num_base_bdevs_operational": 4, 00:38:37.003 "base_bdevs_list": [ 00:38:37.003 { 00:38:37.003 "name": "BaseBdev1", 00:38:37.003 "uuid": "6cc74a84-2b37-4a27-8aa5-740e5cffa4c2", 00:38:37.003 "is_configured": true, 00:38:37.003 "data_offset": 2048, 00:38:37.003 "data_size": 63488 00:38:37.003 }, 00:38:37.003 { 00:38:37.003 "name": "BaseBdev2", 00:38:37.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:37.003 "is_configured": false, 00:38:37.003 "data_offset": 0, 00:38:37.003 "data_size": 0 00:38:37.003 }, 00:38:37.003 { 00:38:37.003 "name": "BaseBdev3", 00:38:37.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:37.003 "is_configured": false, 00:38:37.003 "data_offset": 0, 00:38:37.003 "data_size": 0 00:38:37.003 }, 00:38:37.003 { 00:38:37.003 "name": "BaseBdev4", 00:38:37.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:37.003 "is_configured": false, 00:38:37.003 "data_offset": 0, 00:38:37.003 "data_size": 0 00:38:37.003 } 00:38:37.003 ] 00:38:37.003 }' 00:38:37.003 16:15:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:37.003 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:38:37.570 16:15:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:37.570 [2024-07-22 16:15:41.818959] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:37.570 [2024-07-22 16:15:41.819122] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:38:37.570 16:15:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:38:37.570 16:15:41 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:38:38.136 16:15:42 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:38:38.463 BaseBdev1 00:38:38.463 16:15:42 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:38:38.463 16:15:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:38:38.463 16:15:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:38.463 16:15:42 -- common/autotest_common.sh@889 -- # local i 00:38:38.463 16:15:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:38.463 16:15:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:38.463 16:15:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:38.736 16:15:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:38.993 [ 00:38:38.993 { 00:38:38.993 "name": "BaseBdev1", 00:38:38.993 "aliases": [ 00:38:38.993 "fe4d7576-55bb-4be9-979f-9e46c4960d76" 00:38:38.993 ], 
00:38:38.993 "product_name": "Malloc disk", 00:38:38.993 "block_size": 512, 00:38:38.993 "num_blocks": 65536, 00:38:38.993 "uuid": "fe4d7576-55bb-4be9-979f-9e46c4960d76", 00:38:38.993 "assigned_rate_limits": { 00:38:38.993 "rw_ios_per_sec": 0, 00:38:38.993 "rw_mbytes_per_sec": 0, 00:38:38.993 "r_mbytes_per_sec": 0, 00:38:38.993 "w_mbytes_per_sec": 0 00:38:38.993 }, 00:38:38.993 "claimed": false, 00:38:38.993 "zoned": false, 00:38:38.993 "supported_io_types": { 00:38:38.993 "read": true, 00:38:38.993 "write": true, 00:38:38.993 "unmap": true, 00:38:38.993 "write_zeroes": true, 00:38:38.993 "flush": true, 00:38:38.993 "reset": true, 00:38:38.993 "compare": false, 00:38:38.993 "compare_and_write": false, 00:38:38.993 "abort": true, 00:38:38.993 "nvme_admin": false, 00:38:38.993 "nvme_io": false 00:38:38.993 }, 00:38:38.993 "memory_domains": [ 00:38:38.993 { 00:38:38.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:38.993 "dma_device_type": 2 00:38:38.993 } 00:38:38.993 ], 00:38:38.993 "driver_specific": {} 00:38:38.993 } 00:38:38.993 ] 00:38:38.993 16:15:43 -- common/autotest_common.sh@895 -- # return 0 00:38:38.993 16:15:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:38:39.250 [2024-07-22 16:15:43.375207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:39.250 [2024-07-22 16:15:43.377682] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:39.250 [2024-07-22 16:15:43.377752] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:39.250 [2024-07-22 16:15:43.377777] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:38:39.250 [2024-07-22 16:15:43.377798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:38:39.250 [2024-07-22 16:15:43.377808] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:38:39.250 [2024-07-22 16:15:43.377827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:39.250 16:15:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:39.508 16:15:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:39.508 "name": "Existed_Raid", 
00:38:39.508 "uuid": "a2f82bc9-2e6b-4841-beeb-44492fe8f5cc", 00:38:39.508 "strip_size_kb": 64, 00:38:39.508 "state": "configuring", 00:38:39.508 "raid_level": "concat", 00:38:39.508 "superblock": true, 00:38:39.508 "num_base_bdevs": 4, 00:38:39.508 "num_base_bdevs_discovered": 1, 00:38:39.508 "num_base_bdevs_operational": 4, 00:38:39.508 "base_bdevs_list": [ 00:38:39.508 { 00:38:39.508 "name": "BaseBdev1", 00:38:39.508 "uuid": "fe4d7576-55bb-4be9-979f-9e46c4960d76", 00:38:39.508 "is_configured": true, 00:38:39.508 "data_offset": 2048, 00:38:39.508 "data_size": 63488 00:38:39.508 }, 00:38:39.508 { 00:38:39.508 "name": "BaseBdev2", 00:38:39.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:39.508 "is_configured": false, 00:38:39.508 "data_offset": 0, 00:38:39.508 "data_size": 0 00:38:39.508 }, 00:38:39.508 { 00:38:39.508 "name": "BaseBdev3", 00:38:39.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:39.508 "is_configured": false, 00:38:39.508 "data_offset": 0, 00:38:39.508 "data_size": 0 00:38:39.508 }, 00:38:39.508 { 00:38:39.508 "name": "BaseBdev4", 00:38:39.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:39.508 "is_configured": false, 00:38:39.508 "data_offset": 0, 00:38:39.508 "data_size": 0 00:38:39.508 } 00:38:39.508 ] 00:38:39.508 }' 00:38:39.508 16:15:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:39.508 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:38:40.074 16:15:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:38:40.333 [2024-07-22 16:15:44.377487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:40.333 BaseBdev2 00:38:40.333 16:15:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:38:40.333 16:15:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:38:40.333 16:15:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:40.333 16:15:44 -- common/autotest_common.sh@889 -- # local i 00:38:40.333 16:15:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:40.333 16:15:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:40.333 16:15:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:40.591 16:15:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:40.849 [ 00:38:40.849 { 00:38:40.849 "name": "BaseBdev2", 00:38:40.849 "aliases": [ 00:38:40.849 "527e7a58-d7eb-4560-9034-523dd2bfe5dd" 00:38:40.849 ], 00:38:40.849 "product_name": "Malloc disk", 00:38:40.849 "block_size": 512, 00:38:40.849 "num_blocks": 65536, 00:38:40.849 "uuid": "527e7a58-d7eb-4560-9034-523dd2bfe5dd", 00:38:40.849 "assigned_rate_limits": { 00:38:40.849 "rw_ios_per_sec": 0, 00:38:40.849 "rw_mbytes_per_sec": 0, 00:38:40.849 "r_mbytes_per_sec": 0, 00:38:40.849 "w_mbytes_per_sec": 0 00:38:40.849 }, 00:38:40.849 "claimed": true, 00:38:40.849 "claim_type": "exclusive_write", 00:38:40.849 "zoned": false, 00:38:40.849 "supported_io_types": { 00:38:40.849 "read": true, 00:38:40.849 "write": true, 00:38:40.849 "unmap": true, 00:38:40.849 "write_zeroes": true, 00:38:40.849 "flush": true, 00:38:40.849 "reset": true, 00:38:40.849 "compare": false, 00:38:40.849 "compare_and_write": false, 00:38:40.849 "abort": true, 00:38:40.849 "nvme_admin": false, 00:38:40.849 "nvme_io": false 00:38:40.849 }, 00:38:40.849 
"memory_domains": [ 00:38:40.849 { 00:38:40.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:40.849 "dma_device_type": 2 00:38:40.849 } 00:38:40.849 ], 00:38:40.849 "driver_specific": {} 00:38:40.849 } 00:38:40.849 ] 00:38:40.849 16:15:44 -- common/autotest_common.sh@895 -- # return 0 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:40.849 16:15:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:41.107 16:15:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:41.107 "name": "Existed_Raid", 00:38:41.107 "uuid": "a2f82bc9-2e6b-4841-beeb-44492fe8f5cc", 00:38:41.107 "strip_size_kb": 64, 00:38:41.107 "state": "configuring", 00:38:41.107 "raid_level": "concat", 00:38:41.107 "superblock": true, 00:38:41.107 "num_base_bdevs": 4, 00:38:41.107 "num_base_bdevs_discovered": 2, 00:38:41.107 "num_base_bdevs_operational": 4, 00:38:41.107 "base_bdevs_list": [ 00:38:41.107 { 00:38:41.107 "name": "BaseBdev1", 00:38:41.107 "uuid": "fe4d7576-55bb-4be9-979f-9e46c4960d76", 00:38:41.107 "is_configured": true, 00:38:41.107 "data_offset": 2048, 00:38:41.107 "data_size": 63488 00:38:41.107 }, 00:38:41.107 { 00:38:41.107 "name": "BaseBdev2", 00:38:41.107 "uuid": "527e7a58-d7eb-4560-9034-523dd2bfe5dd", 00:38:41.107 "is_configured": true, 00:38:41.107 "data_offset": 2048, 00:38:41.107 "data_size": 63488 00:38:41.107 }, 00:38:41.107 { 00:38:41.107 "name": "BaseBdev3", 00:38:41.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:41.107 "is_configured": false, 00:38:41.107 "data_offset": 0, 00:38:41.107 "data_size": 0 00:38:41.107 }, 00:38:41.107 { 00:38:41.107 "name": "BaseBdev4", 00:38:41.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:41.107 "is_configured": false, 00:38:41.107 "data_offset": 0, 00:38:41.107 "data_size": 0 00:38:41.107 } 00:38:41.107 ] 00:38:41.107 }' 00:38:41.107 16:15:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:41.107 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:38:41.366 16:15:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:38:41.932 [2024-07-22 16:15:45.918279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:38:41.932 BaseBdev3 00:38:41.932 16:15:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:38:41.932 16:15:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:38:41.932 16:15:45 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:38:41.932 16:15:45 -- common/autotest_common.sh@889 -- # local i 00:38:41.932 16:15:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:41.932 16:15:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:41.932 16:15:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:42.190 16:15:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:38:42.190 [ 00:38:42.190 { 00:38:42.190 "name": "BaseBdev3", 00:38:42.190 "aliases": [ 00:38:42.190 "146dbd23-2482-4620-b332-22e519a93c46" 00:38:42.190 ], 00:38:42.190 "product_name": "Malloc disk", 00:38:42.190 "block_size": 512, 00:38:42.190 "num_blocks": 65536, 00:38:42.190 "uuid": "146dbd23-2482-4620-b332-22e519a93c46", 00:38:42.190 "assigned_rate_limits": { 00:38:42.190 "rw_ios_per_sec": 0, 00:38:42.190 "rw_mbytes_per_sec": 0, 00:38:42.190 "r_mbytes_per_sec": 0, 00:38:42.190 "w_mbytes_per_sec": 0 00:38:42.190 }, 00:38:42.190 "claimed": true, 00:38:42.190 "claim_type": "exclusive_write", 00:38:42.190 "zoned": false, 00:38:42.190 "supported_io_types": { 00:38:42.190 "read": true, 00:38:42.190 "write": true, 00:38:42.190 "unmap": true, 00:38:42.190 "write_zeroes": true, 00:38:42.190 "flush": true, 00:38:42.190 "reset": true, 00:38:42.190 "compare": false, 00:38:42.190 "compare_and_write": false, 00:38:42.190 "abort": true, 00:38:42.190 "nvme_admin": false, 00:38:42.190 "nvme_io": false 00:38:42.190 }, 00:38:42.190 "memory_domains": [ 00:38:42.190 { 00:38:42.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:42.190 "dma_device_type": 2 00:38:42.190 } 00:38:42.190 ], 00:38:42.190 "driver_specific": {} 00:38:42.190 } 00:38:42.190 ] 00:38:42.190 16:15:46 -- common/autotest_common.sh@895 -- # return 0 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:42.190 16:15:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:42.449 16:15:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:42.449 "name": "Existed_Raid", 00:38:42.449 "uuid": "a2f82bc9-2e6b-4841-beeb-44492fe8f5cc", 00:38:42.449 "strip_size_kb": 64, 00:38:42.449 "state": "configuring", 00:38:42.449 "raid_level": "concat", 00:38:42.449 "superblock": true, 00:38:42.449 "num_base_bdevs": 4, 00:38:42.449 "num_base_bdevs_discovered": 3, 00:38:42.449 "num_base_bdevs_operational": 4, 00:38:42.449 "base_bdevs_list": [ 00:38:42.449 { 
00:38:42.449 "name": "BaseBdev1", 00:38:42.449 "uuid": "fe4d7576-55bb-4be9-979f-9e46c4960d76", 00:38:42.449 "is_configured": true, 00:38:42.449 "data_offset": 2048, 00:38:42.449 "data_size": 63488 00:38:42.449 }, 00:38:42.449 { 00:38:42.449 "name": "BaseBdev2", 00:38:42.449 "uuid": "527e7a58-d7eb-4560-9034-523dd2bfe5dd", 00:38:42.449 "is_configured": true, 00:38:42.449 "data_offset": 2048, 00:38:42.449 "data_size": 63488 00:38:42.449 }, 00:38:42.449 { 00:38:42.449 "name": "BaseBdev3", 00:38:42.449 "uuid": "146dbd23-2482-4620-b332-22e519a93c46", 00:38:42.449 "is_configured": true, 00:38:42.449 "data_offset": 2048, 00:38:42.449 "data_size": 63488 00:38:42.449 }, 00:38:42.449 { 00:38:42.449 "name": "BaseBdev4", 00:38:42.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.449 "is_configured": false, 00:38:42.449 "data_offset": 0, 00:38:42.449 "data_size": 0 00:38:42.449 } 00:38:42.449 ] 00:38:42.449 }' 00:38:42.449 16:15:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:42.449 16:15:46 -- common/autotest_common.sh@10 -- # set +x 00:38:43.016 16:15:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:38:43.274 [2024-07-22 16:15:47.337587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:38:43.274 BaseBdev4 00:38:43.274 [2024-07-22 16:15:47.339471] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:38:43.274 [2024-07-22 16:15:47.339499] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:43.274 [2024-07-22 16:15:47.339653] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:38:43.274 [2024-07-22 16:15:47.340054] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:38:43.274 [2024-07-22 16:15:47.340079] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:38:43.274 [2024-07-22 16:15:47.340266] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:43.274 16:15:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:38:43.274 16:15:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:38:43.274 16:15:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:38:43.274 16:15:47 -- common/autotest_common.sh@889 -- # local i 00:38:43.274 16:15:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:38:43.274 16:15:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:38:43.274 16:15:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:43.533 16:15:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:38:43.791 [ 00:38:43.791 { 00:38:43.791 "name": "BaseBdev4", 00:38:43.791 "aliases": [ 00:38:43.791 "2feca14b-3862-42aa-87a1-683bae4638f5" 00:38:43.791 ], 00:38:43.791 "product_name": "Malloc disk", 00:38:43.791 "block_size": 512, 00:38:43.791 "num_blocks": 65536, 00:38:43.791 "uuid": "2feca14b-3862-42aa-87a1-683bae4638f5", 00:38:43.791 "assigned_rate_limits": { 00:38:43.791 "rw_ios_per_sec": 0, 00:38:43.791 "rw_mbytes_per_sec": 0, 00:38:43.792 "r_mbytes_per_sec": 0, 00:38:43.792 "w_mbytes_per_sec": 0 00:38:43.792 }, 00:38:43.792 "claimed": true, 00:38:43.792 "claim_type": "exclusive_write", 00:38:43.792 "zoned": false, 
00:38:43.792 "supported_io_types": { 00:38:43.792 "read": true, 00:38:43.792 "write": true, 00:38:43.792 "unmap": true, 00:38:43.792 "write_zeroes": true, 00:38:43.792 "flush": true, 00:38:43.792 "reset": true, 00:38:43.792 "compare": false, 00:38:43.792 "compare_and_write": false, 00:38:43.792 "abort": true, 00:38:43.792 "nvme_admin": false, 00:38:43.792 "nvme_io": false 00:38:43.792 }, 00:38:43.792 "memory_domains": [ 00:38:43.792 { 00:38:43.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:43.792 "dma_device_type": 2 00:38:43.792 } 00:38:43.792 ], 00:38:43.792 "driver_specific": {} 00:38:43.792 } 00:38:43.792 ] 00:38:43.792 16:15:47 -- common/autotest_common.sh@895 -- # return 0 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:43.792 16:15:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:44.050 16:15:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:44.050 "name": "Existed_Raid", 00:38:44.050 "uuid": "a2f82bc9-2e6b-4841-beeb-44492fe8f5cc", 00:38:44.050 "strip_size_kb": 64, 00:38:44.050 "state": "online", 00:38:44.050 "raid_level": "concat", 00:38:44.050 "superblock": true, 00:38:44.050 "num_base_bdevs": 4, 00:38:44.050 "num_base_bdevs_discovered": 4, 00:38:44.050 "num_base_bdevs_operational": 4, 00:38:44.050 "base_bdevs_list": [ 00:38:44.050 { 00:38:44.050 "name": "BaseBdev1", 00:38:44.050 "uuid": "fe4d7576-55bb-4be9-979f-9e46c4960d76", 00:38:44.050 "is_configured": true, 00:38:44.050 "data_offset": 2048, 00:38:44.050 "data_size": 63488 00:38:44.050 }, 00:38:44.050 { 00:38:44.050 "name": "BaseBdev2", 00:38:44.050 "uuid": "527e7a58-d7eb-4560-9034-523dd2bfe5dd", 00:38:44.050 "is_configured": true, 00:38:44.050 "data_offset": 2048, 00:38:44.050 "data_size": 63488 00:38:44.050 }, 00:38:44.050 { 00:38:44.050 "name": "BaseBdev3", 00:38:44.050 "uuid": "146dbd23-2482-4620-b332-22e519a93c46", 00:38:44.050 "is_configured": true, 00:38:44.050 "data_offset": 2048, 00:38:44.050 "data_size": 63488 00:38:44.050 }, 00:38:44.050 { 00:38:44.050 "name": "BaseBdev4", 00:38:44.050 "uuid": "2feca14b-3862-42aa-87a1-683bae4638f5", 00:38:44.050 "is_configured": true, 00:38:44.050 "data_offset": 2048, 00:38:44.050 "data_size": 63488 00:38:44.050 } 00:38:44.050 ] 00:38:44.050 }' 00:38:44.050 16:15:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:44.050 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:38:44.309 16:15:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:38:44.567 [2024-07-22 16:15:48.714549] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:44.567 [2024-07-22 16:15:48.714633] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:44.567 [2024-07-22 16:15:48.714760] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:44.824 16:15:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:44.824 16:15:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:44.824 "name": "Existed_Raid", 00:38:44.824 "uuid": "a2f82bc9-2e6b-4841-beeb-44492fe8f5cc", 00:38:44.824 "strip_size_kb": 64, 00:38:44.824 "state": "offline", 00:38:44.824 "raid_level": "concat", 00:38:44.824 "superblock": true, 00:38:44.824 "num_base_bdevs": 4, 00:38:44.824 "num_base_bdevs_discovered": 3, 00:38:44.824 "num_base_bdevs_operational": 3, 00:38:44.824 "base_bdevs_list": [ 00:38:44.824 { 00:38:44.824 "name": null, 00:38:44.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:44.824 "is_configured": false, 00:38:44.824 "data_offset": 2048, 00:38:44.824 "data_size": 63488 00:38:44.824 }, 00:38:44.824 { 00:38:44.824 "name": "BaseBdev2", 00:38:44.825 "uuid": "527e7a58-d7eb-4560-9034-523dd2bfe5dd", 00:38:44.825 "is_configured": true, 00:38:44.825 "data_offset": 2048, 00:38:44.825 "data_size": 63488 00:38:44.825 }, 00:38:44.825 { 00:38:44.825 "name": "BaseBdev3", 00:38:44.825 "uuid": "146dbd23-2482-4620-b332-22e519a93c46", 00:38:44.825 "is_configured": true, 00:38:44.825 "data_offset": 2048, 00:38:44.825 "data_size": 63488 00:38:44.825 }, 00:38:44.825 { 00:38:44.825 "name": "BaseBdev4", 00:38:44.825 "uuid": "2feca14b-3862-42aa-87a1-683bae4638f5", 00:38:44.825 "is_configured": true, 00:38:44.825 "data_offset": 2048, 00:38:44.825 "data_size": 63488 00:38:44.825 } 00:38:44.825 ] 00:38:44.825 }' 00:38:44.825 16:15:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:44.825 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:38:45.438 16:15:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:38:45.438 16:15:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:45.438 16:15:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.438 16:15:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:45.438 16:15:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:45.438 16:15:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:45.438 16:15:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:38:45.697 [2024-07-22 16:15:49.893516] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:45.956 16:15:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:45.956 16:15:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:45.956 16:15:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:45.956 16:15:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:46.214 16:15:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:46.215 16:15:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:46.215 16:15:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:38:46.472 [2024-07-22 16:15:50.526129] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:38:46.472 16:15:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:46.472 16:15:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:46.472 16:15:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:46.472 16:15:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:38:46.731 16:15:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:38:46.731 16:15:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:46.731 16:15:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:38:46.989 [2024-07-22 16:15:51.094436] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:38:46.990 [2024-07-22 16:15:51.094523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:38:46.990 16:15:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:38:46.990 16:15:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:38:46.990 16:15:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:38:46.990 16:15:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:47.247 16:15:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:38:47.247 16:15:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:38:47.247 16:15:51 -- bdev/bdev_raid.sh@287 -- # killprocess 77263 00:38:47.247 16:15:51 -- common/autotest_common.sh@926 -- # '[' -z 77263 ']' 00:38:47.247 16:15:51 -- common/autotest_common.sh@930 -- # kill -0 77263 00:38:47.247 16:15:51 -- common/autotest_common.sh@931 -- # uname 00:38:47.247 16:15:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:47.247 16:15:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77263 00:38:47.247 killing process with pid 77263 00:38:47.247 16:15:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:38:47.247 16:15:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:38:47.247 16:15:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77263' 00:38:47.247 
16:15:51 -- common/autotest_common.sh@945 -- # kill 77263 00:38:47.247 [2024-07-22 16:15:51.510111] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:47.247 16:15:51 -- common/autotest_common.sh@950 -- # wait 77263 00:38:47.247 [2024-07-22 16:15:51.510247] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:48.617 16:15:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:38:48.617 00:38:48.617 real 0m15.115s 00:38:48.617 user 0m25.129s 00:38:48.617 sys 0m2.407s 00:38:48.617 16:15:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:48.617 ************************************ 00:38:48.617 END TEST raid_state_function_test_sb 00:38:48.617 ************************************ 00:38:48.617 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:38:48.877 16:15:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:38:48.877 16:15:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:38:48.877 16:15:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:38:48.877 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:38:48.877 ************************************ 00:38:48.877 START TEST raid_superblock_test 00:38:48.877 ************************************ 00:38:48.877 16:15:52 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=77692 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:38:48.878 16:15:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 77692 /var/tmp/spdk-raid.sock 00:38:48.878 16:15:52 -- common/autotest_common.sh@819 -- # '[' -z 77692 ']' 00:38:48.878 16:15:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:48.878 16:15:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:38:48.878 16:15:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:48.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
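Once the listener is up, raid_superblock_test builds its base devices by pairing each malloc disk with a passthru bdev that carries a fixed UUID, as the pt1..pt4 setup below shows. A condensed sketch of that loop, assuming the same RPC socket and the 32 MB / 512-byte-block malloc geometry captured in this run:

    # sketch only: mirrors the bdev_malloc_create / bdev_passthru_create
    # pairs in the trace that follows; socket path and sizes are taken from the log
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done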
00:38:48.878 16:15:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:38:48.878 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:38:48.878 [2024-07-22 16:15:52.992746] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:38:48.878 [2024-07-22 16:15:52.993694] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77692 ] 00:38:49.137 [2024-07-22 16:15:53.172622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.395 [2024-07-22 16:15:53.486643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:49.653 [2024-07-22 16:15:53.736723] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:49.912 16:15:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:38:49.912 16:15:53 -- common/autotest_common.sh@852 -- # return 0 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:49.912 16:15:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:38:50.169 malloc1 00:38:50.169 16:15:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:50.426 [2024-07-22 16:15:54.465961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:50.427 [2024-07-22 16:15:54.466128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:50.427 [2024-07-22 16:15:54.466183] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:38:50.427 [2024-07-22 16:15:54.466214] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:50.427 [2024-07-22 16:15:54.469394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:50.427 [2024-07-22 16:15:54.469436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:50.427 pt1 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:50.427 16:15:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:38:50.684 malloc2 00:38:50.684 16:15:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:50.943 [2024-07-22 16:15:54.988162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:50.943 [2024-07-22 16:15:54.988279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:50.943 [2024-07-22 16:15:54.988324] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:38:50.943 [2024-07-22 16:15:54.988342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:50.943 [2024-07-22 16:15:54.991297] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:50.943 [2024-07-22 16:15:54.991341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:50.943 pt2 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:50.943 16:15:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:38:51.212 malloc3 00:38:51.212 16:15:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:51.486 [2024-07-22 16:15:55.500394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:51.486 [2024-07-22 16:15:55.500494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:51.486 [2024-07-22 16:15:55.500539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:38:51.486 [2024-07-22 16:15:55.500557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:51.486 [2024-07-22 16:15:55.503725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:51.486 [2024-07-22 16:15:55.503768] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:51.486 pt3 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:51.486 16:15:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:38:51.744 malloc4 00:38:51.744 16:15:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:51.744 [2024-07-22 16:15:55.995481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:51.744 [2024-07-22 16:15:55.995621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:51.744 [2024-07-22 16:15:55.995675] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:38:51.744 [2024-07-22 16:15:55.995693] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:51.744 [2024-07-22 16:15:55.998639] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:51.744 [2024-07-22 16:15:55.998682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:51.744 pt4 00:38:51.744 16:15:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:38:51.744 16:15:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:38:51.744 16:15:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:38:52.002 [2024-07-22 16:15:56.227826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:52.002 [2024-07-22 16:15:56.230585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:52.002 [2024-07-22 16:15:56.230723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:52.002 [2024-07-22 16:15:56.230798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:52.002 [2024-07-22 16:15:56.231126] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:38:52.002 [2024-07-22 16:15:56.231145] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:52.002 [2024-07-22 16:15:56.231316] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:38:52.002 [2024-07-22 16:15:56.231836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:38:52.002 [2024-07-22 16:15:56.231865] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:38:52.002 [2024-07-22 16:15:56.232142] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:38:52.003 16:15:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:52.261 16:15:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:52.261 "name": "raid_bdev1", 00:38:52.261 "uuid": "1117fb41-06cb-4291-99b0-6e0642f2b10a", 00:38:52.261 "strip_size_kb": 64, 00:38:52.261 "state": "online", 00:38:52.261 "raid_level": "concat", 00:38:52.261 "superblock": true, 00:38:52.261 "num_base_bdevs": 4, 00:38:52.261 "num_base_bdevs_discovered": 4, 00:38:52.261 "num_base_bdevs_operational": 4, 00:38:52.261 "base_bdevs_list": [ 00:38:52.261 { 00:38:52.261 "name": "pt1", 00:38:52.261 "uuid": "08413443-1d7b-52ef-b635-f86e0c05c15e", 00:38:52.261 "is_configured": true, 00:38:52.261 "data_offset": 2048, 00:38:52.261 "data_size": 63488 00:38:52.261 }, 00:38:52.261 { 00:38:52.261 "name": "pt2", 00:38:52.261 "uuid": "de1dbcec-2833-5482-875d-96f7725c0119", 00:38:52.261 "is_configured": true, 00:38:52.261 "data_offset": 2048, 00:38:52.261 "data_size": 63488 00:38:52.261 }, 00:38:52.261 { 00:38:52.261 "name": "pt3", 00:38:52.261 "uuid": "90eada2e-4328-59d8-869b-0eb1a696be8a", 00:38:52.261 "is_configured": true, 00:38:52.261 "data_offset": 2048, 00:38:52.261 "data_size": 63488 00:38:52.261 }, 00:38:52.261 { 00:38:52.261 "name": "pt4", 00:38:52.261 "uuid": "61c7c31b-3dc5-5dec-8f5d-6c77e9fde5f4", 00:38:52.261 "is_configured": true, 00:38:52.261 "data_offset": 2048, 00:38:52.261 "data_size": 63488 00:38:52.261 } 00:38:52.261 ] 00:38:52.261 }' 00:38:52.261 16:15:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:52.261 16:15:56 -- common/autotest_common.sh@10 -- # set +x 00:38:52.827 16:15:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:38:52.827 16:15:56 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:53.085 [2024-07-22 16:15:57.148768] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:53.085 16:15:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1117fb41-06cb-4291-99b0-6e0642f2b10a 00:38:53.085 16:15:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 1117fb41-06cb-4291-99b0-6e0642f2b10a ']' 00:38:53.085 16:15:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:53.343 [2024-07-22 16:15:57.392490] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:53.343 [2024-07-22 16:15:57.392555] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:53.343 [2024-07-22 16:15:57.392669] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:53.343 [2024-07-22 16:15:57.392781] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:53.343 [2024-07-22 16:15:57.392799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:38:53.343 16:15:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:38:53.343 16:15:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:53.602 16:15:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:38:53.602 16:15:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:38:53.602 16:15:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:53.602 16:15:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:38:53.861 16:15:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:53.861 16:15:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:54.121 16:15:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:54.121 16:15:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:38:54.446 16:15:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:38:54.446 16:15:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:38:54.704 16:15:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:38:54.704 16:15:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:54.961 16:15:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:38:54.961 16:15:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:54.961 16:15:59 -- common/autotest_common.sh@640 -- # local es=0 00:38:54.961 16:15:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:54.961 16:15:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:54.961 16:15:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:54.961 16:15:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:54.961 16:15:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:54.961 16:15:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:54.961 16:15:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:38:54.961 16:15:59 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:54.961 16:15:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:54.961 16:15:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:38:55.218 [2024-07-22 16:15:59.340917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:55.219 [2024-07-22 16:15:59.343285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:55.219 [2024-07-22 16:15:59.343366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:38:55.219 [2024-07-22 16:15:59.343423] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:38:55.219 [2024-07-22 16:15:59.343495] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:38:55.219 [2024-07-22 16:15:59.343566] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:38:55.219 [2024-07-22 16:15:59.343615] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:38:55.219 
[2024-07-22 16:15:59.343646] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:38:55.219 [2024-07-22 16:15:59.343671] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:55.219 [2024-07-22 16:15:59.343695] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:38:55.219 request: 00:38:55.219 { 00:38:55.219 "name": "raid_bdev1", 00:38:55.219 "raid_level": "concat", 00:38:55.219 "base_bdevs": [ 00:38:55.219 "malloc1", 00:38:55.219 "malloc2", 00:38:55.219 "malloc3", 00:38:55.219 "malloc4" 00:38:55.219 ], 00:38:55.219 "superblock": false, 00:38:55.219 "strip_size_kb": 64, 00:38:55.219 "method": "bdev_raid_create", 00:38:55.219 "req_id": 1 00:38:55.219 } 00:38:55.219 Got JSON-RPC error response 00:38:55.219 response: 00:38:55.219 { 00:38:55.219 "code": -17, 00:38:55.219 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:55.219 } 00:38:55.219 16:15:59 -- common/autotest_common.sh@643 -- # es=1 00:38:55.219 16:15:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:38:55.219 16:15:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:38:55.219 16:15:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:38:55.219 16:15:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:38:55.219 16:15:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.477 16:15:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:38:55.477 16:15:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:38:55.477 16:15:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:55.735 [2024-07-22 16:15:59.840971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:55.735 [2024-07-22 16:15:59.841074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:55.735 [2024-07-22 16:15:59.841111] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:38:55.735 [2024-07-22 16:15:59.841128] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:55.735 [2024-07-22 16:15:59.843898] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:55.735 [2024-07-22 16:15:59.843941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:55.735 [2024-07-22 16:15:59.844093] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:38:55.735 [2024-07-22 16:15:59.844166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:55.735 pt1 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.735 16:15:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.994 16:16:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:55.994 "name": "raid_bdev1", 00:38:55.994 "uuid": "1117fb41-06cb-4291-99b0-6e0642f2b10a", 00:38:55.994 "strip_size_kb": 64, 00:38:55.994 "state": "configuring", 00:38:55.994 "raid_level": "concat", 00:38:55.994 "superblock": true, 00:38:55.994 "num_base_bdevs": 4, 00:38:55.994 "num_base_bdevs_discovered": 1, 00:38:55.994 "num_base_bdevs_operational": 4, 00:38:55.994 "base_bdevs_list": [ 00:38:55.994 { 00:38:55.994 "name": "pt1", 00:38:55.994 "uuid": "08413443-1d7b-52ef-b635-f86e0c05c15e", 00:38:55.994 "is_configured": true, 00:38:55.994 "data_offset": 2048, 00:38:55.994 "data_size": 63488 00:38:55.994 }, 00:38:55.994 { 00:38:55.994 "name": null, 00:38:55.994 "uuid": "de1dbcec-2833-5482-875d-96f7725c0119", 00:38:55.994 "is_configured": false, 00:38:55.994 "data_offset": 2048, 00:38:55.994 "data_size": 63488 00:38:55.994 }, 00:38:55.994 { 00:38:55.994 "name": null, 00:38:55.994 "uuid": "90eada2e-4328-59d8-869b-0eb1a696be8a", 00:38:55.994 "is_configured": false, 00:38:55.994 "data_offset": 2048, 00:38:55.994 "data_size": 63488 00:38:55.994 }, 00:38:55.994 { 00:38:55.994 "name": null, 00:38:55.994 "uuid": "61c7c31b-3dc5-5dec-8f5d-6c77e9fde5f4", 00:38:55.994 "is_configured": false, 00:38:55.994 "data_offset": 2048, 00:38:55.994 "data_size": 63488 00:38:55.994 } 00:38:55.994 ] 00:38:55.994 }' 00:38:55.994 16:16:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:55.994 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:38:56.252 16:16:00 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:38:56.252 16:16:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:56.510 [2024-07-22 16:16:00.681341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:56.510 [2024-07-22 16:16:00.681493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:56.510 [2024-07-22 16:16:00.681577] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:38:56.510 [2024-07-22 16:16:00.681610] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:56.510 [2024-07-22 16:16:00.682669] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:56.510 [2024-07-22 16:16:00.682735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:56.510 [2024-07-22 16:16:00.682915] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:38:56.510 [2024-07-22 16:16:00.682976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:56.510 pt2 00:38:56.510 16:16:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:56.768 [2024-07-22 16:16:00.941336] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:56.769 16:16:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.027 16:16:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:57.027 "name": "raid_bdev1", 00:38:57.027 "uuid": "1117fb41-06cb-4291-99b0-6e0642f2b10a", 00:38:57.027 "strip_size_kb": 64, 00:38:57.027 "state": "configuring", 00:38:57.027 "raid_level": "concat", 00:38:57.027 "superblock": true, 00:38:57.027 "num_base_bdevs": 4, 00:38:57.027 "num_base_bdevs_discovered": 1, 00:38:57.027 "num_base_bdevs_operational": 4, 00:38:57.027 "base_bdevs_list": [ 00:38:57.027 { 00:38:57.027 "name": "pt1", 00:38:57.027 "uuid": "08413443-1d7b-52ef-b635-f86e0c05c15e", 00:38:57.027 "is_configured": true, 00:38:57.027 "data_offset": 2048, 00:38:57.027 "data_size": 63488 00:38:57.027 }, 00:38:57.027 { 00:38:57.027 "name": null, 00:38:57.027 "uuid": "de1dbcec-2833-5482-875d-96f7725c0119", 00:38:57.027 "is_configured": false, 00:38:57.027 "data_offset": 2048, 00:38:57.027 "data_size": 63488 00:38:57.027 }, 00:38:57.027 { 00:38:57.027 "name": null, 00:38:57.027 "uuid": "90eada2e-4328-59d8-869b-0eb1a696be8a", 00:38:57.027 "is_configured": false, 00:38:57.027 "data_offset": 2048, 00:38:57.027 "data_size": 63488 00:38:57.027 }, 00:38:57.027 { 00:38:57.027 "name": null, 00:38:57.027 "uuid": "61c7c31b-3dc5-5dec-8f5d-6c77e9fde5f4", 00:38:57.027 "is_configured": false, 00:38:57.027 "data_offset": 2048, 00:38:57.027 "data_size": 63488 00:38:57.027 } 00:38:57.027 ] 00:38:57.027 }' 00:38:57.027 16:16:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:57.027 16:16:01 -- common/autotest_common.sh@10 -- # set +x 00:38:57.286 16:16:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:38:57.286 16:16:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:38:57.286 16:16:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:57.544 [2024-07-22 16:16:01.769493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:57.544 [2024-07-22 16:16:01.769618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:57.544 [2024-07-22 16:16:01.769654] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:38:57.544 [2024-07-22 16:16:01.769675] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:57.544 [2024-07-22 16:16:01.770266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:57.544 [2024-07-22 16:16:01.770299] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:57.544 [2024-07-22 16:16:01.770412] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:38:57.544 [2024-07-22 16:16:01.770451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:57.544 pt2 00:38:57.544 16:16:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:38:57.544 16:16:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:38:57.544 16:16:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:38:57.803 [2024-07-22 16:16:02.037621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:38:57.803 [2024-07-22 16:16:02.037736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:57.803 [2024-07-22 16:16:02.037772] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:38:57.803 [2024-07-22 16:16:02.037792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:57.803 [2024-07-22 16:16:02.038372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:57.803 [2024-07-22 16:16:02.038415] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:38:57.803 [2024-07-22 16:16:02.038528] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:38:57.803 [2024-07-22 16:16:02.038577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:38:57.803 pt3 00:38:57.803 16:16:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:38:57.803 16:16:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:38:57.803 16:16:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:38:58.064 [2024-07-22 16:16:02.309351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:38:58.064 [2024-07-22 16:16:02.309680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:58.064 [2024-07-22 16:16:02.309728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:38:58.064 [2024-07-22 16:16:02.309750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:58.064 [2024-07-22 16:16:02.310340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:58.064 [2024-07-22 16:16:02.310378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:38:58.064 [2024-07-22 16:16:02.310508] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:38:58.064 [2024-07-22 16:16:02.310552] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:38:58.064 [2024-07-22 16:16:02.310790] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:38:58.064 [2024-07-22 16:16:02.310812] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:38:58.064 [2024-07-22 16:16:02.310953] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:38:58.064 [2024-07-22 16:16:02.311406] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:38:58.064 [2024-07-22 16:16:02.311425] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:38:58.064 [2024-07-22 16:16:02.311582] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:38:58.064 pt4 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:38:58.064 16:16:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:38:58.337 16:16:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:58.337 16:16:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:58.337 16:16:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:38:58.337 "name": "raid_bdev1", 00:38:58.337 "uuid": "1117fb41-06cb-4291-99b0-6e0642f2b10a", 00:38:58.337 "strip_size_kb": 64, 00:38:58.337 "state": "online", 00:38:58.337 "raid_level": "concat", 00:38:58.337 "superblock": true, 00:38:58.337 "num_base_bdevs": 4, 00:38:58.337 "num_base_bdevs_discovered": 4, 00:38:58.337 "num_base_bdevs_operational": 4, 00:38:58.337 "base_bdevs_list": [ 00:38:58.337 { 00:38:58.337 "name": "pt1", 00:38:58.337 "uuid": "08413443-1d7b-52ef-b635-f86e0c05c15e", 00:38:58.337 "is_configured": true, 00:38:58.337 "data_offset": 2048, 00:38:58.337 "data_size": 63488 00:38:58.337 }, 00:38:58.337 { 00:38:58.337 "name": "pt2", 00:38:58.337 "uuid": "de1dbcec-2833-5482-875d-96f7725c0119", 00:38:58.337 "is_configured": true, 00:38:58.337 "data_offset": 2048, 00:38:58.337 "data_size": 63488 00:38:58.337 }, 00:38:58.337 { 00:38:58.337 "name": "pt3", 00:38:58.337 "uuid": "90eada2e-4328-59d8-869b-0eb1a696be8a", 00:38:58.337 "is_configured": true, 00:38:58.337 "data_offset": 2048, 00:38:58.337 "data_size": 63488 00:38:58.337 }, 00:38:58.337 { 00:38:58.337 "name": "pt4", 00:38:58.337 "uuid": "61c7c31b-3dc5-5dec-8f5d-6c77e9fde5f4", 00:38:58.337 "is_configured": true, 00:38:58.337 "data_offset": 2048, 00:38:58.337 "data_size": 63488 00:38:58.337 } 00:38:58.337 ] 00:38:58.337 }' 00:38:58.337 16:16:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:38:58.596 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:38:58.866 16:16:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:58.866 16:16:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:38:59.132 [2024-07-22 16:16:03.218005] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:59.132 16:16:03 -- bdev/bdev_raid.sh@430 -- # '[' 1117fb41-06cb-4291-99b0-6e0642f2b10a '!=' 1117fb41-06cb-4291-99b0-6e0642f2b10a ']' 00:38:59.132 16:16:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:38:59.132 16:16:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:38:59.132 16:16:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:38:59.132 16:16:03 -- bdev/bdev_raid.sh@511 -- # killprocess 77692 00:38:59.132 16:16:03 -- common/autotest_common.sh@926 -- # '[' 
-z 77692 ']' 00:38:59.132 16:16:03 -- common/autotest_common.sh@930 -- # kill -0 77692 00:38:59.132 16:16:03 -- common/autotest_common.sh@931 -- # uname 00:38:59.132 16:16:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:38:59.132 16:16:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77692 00:38:59.132 killing process with pid 77692 00:38:59.132 16:16:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:38:59.132 16:16:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:38:59.132 16:16:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77692' 00:38:59.132 16:16:03 -- common/autotest_common.sh@945 -- # kill 77692 00:38:59.132 16:16:03 -- common/autotest_common.sh@950 -- # wait 77692 00:38:59.132 [2024-07-22 16:16:03.274948] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:59.132 [2024-07-22 16:16:03.275138] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:59.132 [2024-07-22 16:16:03.275261] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:59.132 [2024-07-22 16:16:03.275282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:38:59.699 [2024-07-22 16:16:03.672139] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:01.075 ************************************ 00:39:01.075 END TEST raid_superblock_test 00:39:01.075 ************************************ 00:39:01.075 16:16:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:39:01.075 00:39:01.075 real 0m12.062s 00:39:01.075 user 0m19.604s 00:39:01.075 sys 0m1.973s 00:39:01.075 16:16:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:01.075 16:16:04 -- common/autotest_common.sh@10 -- # set +x 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:39:01.075 16:16:05 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:39:01.075 16:16:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:01.075 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:39:01.075 ************************************ 00:39:01.075 START TEST raid_state_function_test 00:39:01.075 ************************************ 00:39:01.075 16:16:05 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- 
# (( i <= num_base_bdevs )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:39:01.075 Process raid pid: 78001 00:39:01.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=78001 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 78001' 00:39:01.075 16:16:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 78001 /var/tmp/spdk-raid.sock 00:39:01.075 16:16:05 -- common/autotest_common.sh@819 -- # '[' -z 78001 ']' 00:39:01.076 16:16:05 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:39:01.076 16:16:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:01.076 16:16:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:01.076 16:16:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:01.076 16:16:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:01.076 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:39:01.076 [2024-07-22 16:16:05.123651] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:39:01.076 [2024-07-22 16:16:05.123852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.076 [2024-07-22 16:16:05.315321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.643 [2024-07-22 16:16:05.624751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.643 [2024-07-22 16:16:05.863596] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:01.902 16:16:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:01.902 16:16:06 -- common/autotest_common.sh@852 -- # return 0 00:39:01.902 16:16:06 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:39:02.179 [2024-07-22 16:16:06.339957] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:02.179 [2024-07-22 16:16:06.340072] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:02.179 [2024-07-22 16:16:06.340091] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:02.179 [2024-07-22 16:16:06.340110] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:02.179 [2024-07-22 16:16:06.340120] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:02.179 [2024-07-22 16:16:06.340136] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:02.179 [2024-07-22 16:16:06.340146] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:02.179 [2024-07-22 16:16:06.340161] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:02.179 16:16:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:02.436 16:16:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:02.436 "name": "Existed_Raid", 00:39:02.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.436 "strip_size_kb": 0, 00:39:02.436 "state": "configuring", 00:39:02.436 "raid_level": "raid1", 00:39:02.436 "superblock": false, 00:39:02.436 "num_base_bdevs": 4, 00:39:02.436 "num_base_bdevs_discovered": 0, 00:39:02.436 "num_base_bdevs_operational": 4, 00:39:02.436 "base_bdevs_list": [ 00:39:02.436 { 00:39:02.436 "name": 
"BaseBdev1", 00:39:02.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.436 "is_configured": false, 00:39:02.436 "data_offset": 0, 00:39:02.436 "data_size": 0 00:39:02.436 }, 00:39:02.436 { 00:39:02.436 "name": "BaseBdev2", 00:39:02.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.436 "is_configured": false, 00:39:02.436 "data_offset": 0, 00:39:02.436 "data_size": 0 00:39:02.436 }, 00:39:02.436 { 00:39:02.436 "name": "BaseBdev3", 00:39:02.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.436 "is_configured": false, 00:39:02.436 "data_offset": 0, 00:39:02.436 "data_size": 0 00:39:02.436 }, 00:39:02.436 { 00:39:02.436 "name": "BaseBdev4", 00:39:02.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.436 "is_configured": false, 00:39:02.436 "data_offset": 0, 00:39:02.436 "data_size": 0 00:39:02.436 } 00:39:02.436 ] 00:39:02.436 }' 00:39:02.436 16:16:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:02.436 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:39:02.694 16:16:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:39:02.954 [2024-07-22 16:16:07.204034] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:02.954 [2024-07-22 16:16:07.204102] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:39:02.954 16:16:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:39:03.521 [2024-07-22 16:16:07.536255] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:03.521 [2024-07-22 16:16:07.536597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:03.521 [2024-07-22 16:16:07.536625] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:03.521 [2024-07-22 16:16:07.536644] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:03.521 [2024-07-22 16:16:07.536655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:03.521 [2024-07-22 16:16:07.536670] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:03.521 [2024-07-22 16:16:07.536680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:03.521 [2024-07-22 16:16:07.536696] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:03.521 16:16:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:39:03.779 [2024-07-22 16:16:07.872554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:03.779 BaseBdev1 00:39:03.779 16:16:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:39:03.779 16:16:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:39:03.779 16:16:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:03.779 16:16:07 -- common/autotest_common.sh@889 -- # local i 00:39:03.779 16:16:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:03.779 16:16:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:03.779 16:16:07 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:04.038 16:16:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:04.314 [ 00:39:04.314 { 00:39:04.314 "name": "BaseBdev1", 00:39:04.314 "aliases": [ 00:39:04.314 "7976b981-587f-457e-9bdd-1b1008b045d6" 00:39:04.314 ], 00:39:04.314 "product_name": "Malloc disk", 00:39:04.314 "block_size": 512, 00:39:04.314 "num_blocks": 65536, 00:39:04.314 "uuid": "7976b981-587f-457e-9bdd-1b1008b045d6", 00:39:04.314 "assigned_rate_limits": { 00:39:04.314 "rw_ios_per_sec": 0, 00:39:04.314 "rw_mbytes_per_sec": 0, 00:39:04.314 "r_mbytes_per_sec": 0, 00:39:04.314 "w_mbytes_per_sec": 0 00:39:04.314 }, 00:39:04.314 "claimed": true, 00:39:04.314 "claim_type": "exclusive_write", 00:39:04.314 "zoned": false, 00:39:04.314 "supported_io_types": { 00:39:04.314 "read": true, 00:39:04.314 "write": true, 00:39:04.314 "unmap": true, 00:39:04.314 "write_zeroes": true, 00:39:04.314 "flush": true, 00:39:04.314 "reset": true, 00:39:04.314 "compare": false, 00:39:04.314 "compare_and_write": false, 00:39:04.314 "abort": true, 00:39:04.314 "nvme_admin": false, 00:39:04.314 "nvme_io": false 00:39:04.314 }, 00:39:04.314 "memory_domains": [ 00:39:04.314 { 00:39:04.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:04.314 "dma_device_type": 2 00:39:04.314 } 00:39:04.314 ], 00:39:04.314 "driver_specific": {} 00:39:04.314 } 00:39:04.314 ] 00:39:04.314 16:16:08 -- common/autotest_common.sh@895 -- # return 0 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:04.314 16:16:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:04.572 16:16:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:04.572 "name": "Existed_Raid", 00:39:04.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.572 "strip_size_kb": 0, 00:39:04.572 "state": "configuring", 00:39:04.572 "raid_level": "raid1", 00:39:04.572 "superblock": false, 00:39:04.572 "num_base_bdevs": 4, 00:39:04.572 "num_base_bdevs_discovered": 1, 00:39:04.572 "num_base_bdevs_operational": 4, 00:39:04.572 "base_bdevs_list": [ 00:39:04.572 { 00:39:04.572 "name": "BaseBdev1", 00:39:04.572 "uuid": "7976b981-587f-457e-9bdd-1b1008b045d6", 00:39:04.572 "is_configured": true, 00:39:04.572 "data_offset": 0, 00:39:04.572 "data_size": 65536 00:39:04.572 }, 00:39:04.572 { 00:39:04.572 "name": "BaseBdev2", 00:39:04.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.572 "is_configured": false, 00:39:04.572 "data_offset": 0, 00:39:04.572 "data_size": 0 00:39:04.572 }, 
00:39:04.572 { 00:39:04.572 "name": "BaseBdev3", 00:39:04.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.572 "is_configured": false, 00:39:04.572 "data_offset": 0, 00:39:04.572 "data_size": 0 00:39:04.572 }, 00:39:04.572 { 00:39:04.572 "name": "BaseBdev4", 00:39:04.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.572 "is_configured": false, 00:39:04.572 "data_offset": 0, 00:39:04.572 "data_size": 0 00:39:04.572 } 00:39:04.572 ] 00:39:04.572 }' 00:39:04.572 16:16:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:04.572 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:39:04.831 16:16:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:39:05.089 [2024-07-22 16:16:09.249406] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:05.089 [2024-07-22 16:16:09.249497] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:39:05.089 16:16:09 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:39:05.089 16:16:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:39:05.347 [2024-07-22 16:16:09.533524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:05.347 [2024-07-22 16:16:09.536423] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:05.347 [2024-07-22 16:16:09.536843] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:05.347 [2024-07-22 16:16:09.536900] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:05.347 [2024-07-22 16:16:09.536920] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:05.347 [2024-07-22 16:16:09.536930] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:05.347 [2024-07-22 16:16:09.536966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:05.347 16:16:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:05.605 16:16:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:05.605 "name": "Existed_Raid", 00:39:05.605 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:39:05.605 "strip_size_kb": 0, 00:39:05.605 "state": "configuring", 00:39:05.605 "raid_level": "raid1", 00:39:05.605 "superblock": false, 00:39:05.605 "num_base_bdevs": 4, 00:39:05.605 "num_base_bdevs_discovered": 1, 00:39:05.605 "num_base_bdevs_operational": 4, 00:39:05.605 "base_bdevs_list": [ 00:39:05.605 { 00:39:05.605 "name": "BaseBdev1", 00:39:05.605 "uuid": "7976b981-587f-457e-9bdd-1b1008b045d6", 00:39:05.605 "is_configured": true, 00:39:05.605 "data_offset": 0, 00:39:05.605 "data_size": 65536 00:39:05.605 }, 00:39:05.605 { 00:39:05.605 "name": "BaseBdev2", 00:39:05.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:05.605 "is_configured": false, 00:39:05.605 "data_offset": 0, 00:39:05.605 "data_size": 0 00:39:05.605 }, 00:39:05.605 { 00:39:05.605 "name": "BaseBdev3", 00:39:05.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:05.605 "is_configured": false, 00:39:05.605 "data_offset": 0, 00:39:05.605 "data_size": 0 00:39:05.605 }, 00:39:05.605 { 00:39:05.605 "name": "BaseBdev4", 00:39:05.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:05.605 "is_configured": false, 00:39:05.605 "data_offset": 0, 00:39:05.605 "data_size": 0 00:39:05.605 } 00:39:05.605 ] 00:39:05.605 }' 00:39:05.605 16:16:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:05.605 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:39:05.864 16:16:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:39:06.122 [2024-07-22 16:16:10.382788] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:06.122 BaseBdev2 00:39:06.379 16:16:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:39:06.379 16:16:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:39:06.379 16:16:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:06.379 16:16:10 -- common/autotest_common.sh@889 -- # local i 00:39:06.379 16:16:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:06.379 16:16:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:06.379 16:16:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:06.638 16:16:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:06.896 [ 00:39:06.896 { 00:39:06.896 "name": "BaseBdev2", 00:39:06.896 "aliases": [ 00:39:06.896 "0802cb1b-dc6f-4ab5-937b-e851895de6c4" 00:39:06.896 ], 00:39:06.896 "product_name": "Malloc disk", 00:39:06.896 "block_size": 512, 00:39:06.896 "num_blocks": 65536, 00:39:06.896 "uuid": "0802cb1b-dc6f-4ab5-937b-e851895de6c4", 00:39:06.896 "assigned_rate_limits": { 00:39:06.896 "rw_ios_per_sec": 0, 00:39:06.896 "rw_mbytes_per_sec": 0, 00:39:06.896 "r_mbytes_per_sec": 0, 00:39:06.896 "w_mbytes_per_sec": 0 00:39:06.896 }, 00:39:06.896 "claimed": true, 00:39:06.896 "claim_type": "exclusive_write", 00:39:06.896 "zoned": false, 00:39:06.896 "supported_io_types": { 00:39:06.896 "read": true, 00:39:06.896 "write": true, 00:39:06.896 "unmap": true, 00:39:06.896 "write_zeroes": true, 00:39:06.896 "flush": true, 00:39:06.896 "reset": true, 00:39:06.896 "compare": false, 00:39:06.896 "compare_and_write": false, 00:39:06.896 "abort": true, 00:39:06.896 "nvme_admin": false, 00:39:06.896 "nvme_io": false 00:39:06.896 }, 00:39:06.896 "memory_domains": [ 00:39:06.896 { 
00:39:06.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:06.896 "dma_device_type": 2 00:39:06.896 } 00:39:06.896 ], 00:39:06.896 "driver_specific": {} 00:39:06.896 } 00:39:06.896 ] 00:39:06.896 16:16:11 -- common/autotest_common.sh@895 -- # return 0 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:06.896 16:16:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:07.154 16:16:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:07.154 "name": "Existed_Raid", 00:39:07.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:07.154 "strip_size_kb": 0, 00:39:07.154 "state": "configuring", 00:39:07.154 "raid_level": "raid1", 00:39:07.154 "superblock": false, 00:39:07.154 "num_base_bdevs": 4, 00:39:07.154 "num_base_bdevs_discovered": 2, 00:39:07.155 "num_base_bdevs_operational": 4, 00:39:07.155 "base_bdevs_list": [ 00:39:07.155 { 00:39:07.155 "name": "BaseBdev1", 00:39:07.155 "uuid": "7976b981-587f-457e-9bdd-1b1008b045d6", 00:39:07.155 "is_configured": true, 00:39:07.155 "data_offset": 0, 00:39:07.155 "data_size": 65536 00:39:07.155 }, 00:39:07.155 { 00:39:07.155 "name": "BaseBdev2", 00:39:07.155 "uuid": "0802cb1b-dc6f-4ab5-937b-e851895de6c4", 00:39:07.155 "is_configured": true, 00:39:07.155 "data_offset": 0, 00:39:07.155 "data_size": 65536 00:39:07.155 }, 00:39:07.155 { 00:39:07.155 "name": "BaseBdev3", 00:39:07.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:07.155 "is_configured": false, 00:39:07.155 "data_offset": 0, 00:39:07.155 "data_size": 0 00:39:07.155 }, 00:39:07.155 { 00:39:07.155 "name": "BaseBdev4", 00:39:07.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:07.155 "is_configured": false, 00:39:07.155 "data_offset": 0, 00:39:07.155 "data_size": 0 00:39:07.155 } 00:39:07.155 ] 00:39:07.155 }' 00:39:07.155 16:16:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:07.155 16:16:11 -- common/autotest_common.sh@10 -- # set +x 00:39:07.412 16:16:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:39:07.670 [2024-07-22 16:16:11.915754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:07.670 BaseBdev3 00:39:07.670 16:16:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:39:07.670 16:16:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:39:07.670 16:16:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:07.670 16:16:11 -- 
common/autotest_common.sh@889 -- # local i 00:39:07.670 16:16:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:07.670 16:16:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:07.670 16:16:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:07.928 16:16:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:08.187 [ 00:39:08.187 { 00:39:08.187 "name": "BaseBdev3", 00:39:08.187 "aliases": [ 00:39:08.187 "8b2f17e4-4e63-44b6-af18-f0efcb80b387" 00:39:08.187 ], 00:39:08.187 "product_name": "Malloc disk", 00:39:08.187 "block_size": 512, 00:39:08.187 "num_blocks": 65536, 00:39:08.187 "uuid": "8b2f17e4-4e63-44b6-af18-f0efcb80b387", 00:39:08.187 "assigned_rate_limits": { 00:39:08.187 "rw_ios_per_sec": 0, 00:39:08.187 "rw_mbytes_per_sec": 0, 00:39:08.187 "r_mbytes_per_sec": 0, 00:39:08.187 "w_mbytes_per_sec": 0 00:39:08.187 }, 00:39:08.187 "claimed": true, 00:39:08.187 "claim_type": "exclusive_write", 00:39:08.187 "zoned": false, 00:39:08.187 "supported_io_types": { 00:39:08.187 "read": true, 00:39:08.187 "write": true, 00:39:08.187 "unmap": true, 00:39:08.187 "write_zeroes": true, 00:39:08.187 "flush": true, 00:39:08.187 "reset": true, 00:39:08.187 "compare": false, 00:39:08.187 "compare_and_write": false, 00:39:08.187 "abort": true, 00:39:08.187 "nvme_admin": false, 00:39:08.187 "nvme_io": false 00:39:08.187 }, 00:39:08.187 "memory_domains": [ 00:39:08.187 { 00:39:08.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:08.187 "dma_device_type": 2 00:39:08.187 } 00:39:08.187 ], 00:39:08.187 "driver_specific": {} 00:39:08.187 } 00:39:08.187 ] 00:39:08.187 16:16:12 -- common/autotest_common.sh@895 -- # return 0 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:08.187 16:16:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:08.446 16:16:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:08.446 "name": "Existed_Raid", 00:39:08.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:08.446 "strip_size_kb": 0, 00:39:08.446 "state": "configuring", 00:39:08.446 "raid_level": "raid1", 00:39:08.446 "superblock": false, 00:39:08.446 "num_base_bdevs": 4, 00:39:08.446 "num_base_bdevs_discovered": 3, 00:39:08.447 "num_base_bdevs_operational": 4, 00:39:08.447 "base_bdevs_list": [ 00:39:08.447 { 00:39:08.447 "name": "BaseBdev1", 
00:39:08.447 "uuid": "7976b981-587f-457e-9bdd-1b1008b045d6", 00:39:08.447 "is_configured": true, 00:39:08.447 "data_offset": 0, 00:39:08.447 "data_size": 65536 00:39:08.447 }, 00:39:08.447 { 00:39:08.447 "name": "BaseBdev2", 00:39:08.447 "uuid": "0802cb1b-dc6f-4ab5-937b-e851895de6c4", 00:39:08.447 "is_configured": true, 00:39:08.447 "data_offset": 0, 00:39:08.447 "data_size": 65536 00:39:08.447 }, 00:39:08.447 { 00:39:08.447 "name": "BaseBdev3", 00:39:08.447 "uuid": "8b2f17e4-4e63-44b6-af18-f0efcb80b387", 00:39:08.447 "is_configured": true, 00:39:08.447 "data_offset": 0, 00:39:08.447 "data_size": 65536 00:39:08.447 }, 00:39:08.447 { 00:39:08.447 "name": "BaseBdev4", 00:39:08.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:08.447 "is_configured": false, 00:39:08.447 "data_offset": 0, 00:39:08.447 "data_size": 0 00:39:08.447 } 00:39:08.447 ] 00:39:08.447 }' 00:39:08.447 16:16:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:08.447 16:16:12 -- common/autotest_common.sh@10 -- # set +x 00:39:09.012 16:16:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:39:09.270 [2024-07-22 16:16:13.346975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:09.270 [2024-07-22 16:16:13.347069] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:39:09.270 [2024-07-22 16:16:13.347086] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:39:09.270 [2024-07-22 16:16:13.347222] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:39:09.270 [2024-07-22 16:16:13.347654] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:39:09.270 [2024-07-22 16:16:13.347679] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:39:09.270 [2024-07-22 16:16:13.347962] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:09.270 BaseBdev4 00:39:09.270 16:16:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:39:09.270 16:16:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:39:09.270 16:16:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:09.270 16:16:13 -- common/autotest_common.sh@889 -- # local i 00:39:09.270 16:16:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:09.270 16:16:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:09.270 16:16:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:09.529 16:16:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:39:09.787 [ 00:39:09.787 { 00:39:09.787 "name": "BaseBdev4", 00:39:09.787 "aliases": [ 00:39:09.787 "f0b9d982-86f4-427e-bd90-6cfd64f89434" 00:39:09.787 ], 00:39:09.787 "product_name": "Malloc disk", 00:39:09.787 "block_size": 512, 00:39:09.788 "num_blocks": 65536, 00:39:09.788 "uuid": "f0b9d982-86f4-427e-bd90-6cfd64f89434", 00:39:09.788 "assigned_rate_limits": { 00:39:09.788 "rw_ios_per_sec": 0, 00:39:09.788 "rw_mbytes_per_sec": 0, 00:39:09.788 "r_mbytes_per_sec": 0, 00:39:09.788 "w_mbytes_per_sec": 0 00:39:09.788 }, 00:39:09.788 "claimed": true, 00:39:09.788 "claim_type": "exclusive_write", 00:39:09.788 "zoned": false, 00:39:09.788 "supported_io_types": { 
00:39:09.788 "read": true, 00:39:09.788 "write": true, 00:39:09.788 "unmap": true, 00:39:09.788 "write_zeroes": true, 00:39:09.788 "flush": true, 00:39:09.788 "reset": true, 00:39:09.788 "compare": false, 00:39:09.788 "compare_and_write": false, 00:39:09.788 "abort": true, 00:39:09.788 "nvme_admin": false, 00:39:09.788 "nvme_io": false 00:39:09.788 }, 00:39:09.788 "memory_domains": [ 00:39:09.788 { 00:39:09.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:09.788 "dma_device_type": 2 00:39:09.788 } 00:39:09.788 ], 00:39:09.788 "driver_specific": {} 00:39:09.788 } 00:39:09.788 ] 00:39:09.788 16:16:13 -- common/autotest_common.sh@895 -- # return 0 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:09.788 16:16:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:10.046 16:16:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:10.046 "name": "Existed_Raid", 00:39:10.046 "uuid": "44a39363-f248-49f6-868a-634b1c23d1ee", 00:39:10.046 "strip_size_kb": 0, 00:39:10.046 "state": "online", 00:39:10.046 "raid_level": "raid1", 00:39:10.046 "superblock": false, 00:39:10.046 "num_base_bdevs": 4, 00:39:10.046 "num_base_bdevs_discovered": 4, 00:39:10.046 "num_base_bdevs_operational": 4, 00:39:10.046 "base_bdevs_list": [ 00:39:10.046 { 00:39:10.046 "name": "BaseBdev1", 00:39:10.046 "uuid": "7976b981-587f-457e-9bdd-1b1008b045d6", 00:39:10.046 "is_configured": true, 00:39:10.046 "data_offset": 0, 00:39:10.046 "data_size": 65536 00:39:10.046 }, 00:39:10.046 { 00:39:10.046 "name": "BaseBdev2", 00:39:10.046 "uuid": "0802cb1b-dc6f-4ab5-937b-e851895de6c4", 00:39:10.046 "is_configured": true, 00:39:10.046 "data_offset": 0, 00:39:10.046 "data_size": 65536 00:39:10.046 }, 00:39:10.046 { 00:39:10.046 "name": "BaseBdev3", 00:39:10.046 "uuid": "8b2f17e4-4e63-44b6-af18-f0efcb80b387", 00:39:10.046 "is_configured": true, 00:39:10.046 "data_offset": 0, 00:39:10.046 "data_size": 65536 00:39:10.046 }, 00:39:10.046 { 00:39:10.046 "name": "BaseBdev4", 00:39:10.046 "uuid": "f0b9d982-86f4-427e-bd90-6cfd64f89434", 00:39:10.046 "is_configured": true, 00:39:10.046 "data_offset": 0, 00:39:10.046 "data_size": 65536 00:39:10.046 } 00:39:10.046 ] 00:39:10.046 }' 00:39:10.046 16:16:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:10.046 16:16:14 -- common/autotest_common.sh@10 -- # set +x 00:39:10.302 16:16:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:39:10.560 [2024-07-22 16:16:14.684233] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:10.560 16:16:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:10.819 16:16:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:10.819 "name": "Existed_Raid", 00:39:10.819 "uuid": "44a39363-f248-49f6-868a-634b1c23d1ee", 00:39:10.819 "strip_size_kb": 0, 00:39:10.819 "state": "online", 00:39:10.819 "raid_level": "raid1", 00:39:10.819 "superblock": false, 00:39:10.819 "num_base_bdevs": 4, 00:39:10.819 "num_base_bdevs_discovered": 3, 00:39:10.819 "num_base_bdevs_operational": 3, 00:39:10.819 "base_bdevs_list": [ 00:39:10.819 { 00:39:10.819 "name": null, 00:39:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:10.819 "is_configured": false, 00:39:10.819 "data_offset": 0, 00:39:10.819 "data_size": 65536 00:39:10.819 }, 00:39:10.819 { 00:39:10.819 "name": "BaseBdev2", 00:39:10.819 "uuid": "0802cb1b-dc6f-4ab5-937b-e851895de6c4", 00:39:10.819 "is_configured": true, 00:39:10.819 "data_offset": 0, 00:39:10.819 "data_size": 65536 00:39:10.819 }, 00:39:10.819 { 00:39:10.819 "name": "BaseBdev3", 00:39:10.819 "uuid": "8b2f17e4-4e63-44b6-af18-f0efcb80b387", 00:39:10.819 "is_configured": true, 00:39:10.819 "data_offset": 0, 00:39:10.819 "data_size": 65536 00:39:10.819 }, 00:39:10.819 { 00:39:10.819 "name": "BaseBdev4", 00:39:10.819 "uuid": "f0b9d982-86f4-427e-bd90-6cfd64f89434", 00:39:10.819 "is_configured": true, 00:39:10.819 "data_offset": 0, 00:39:10.819 "data_size": 65536 00:39:10.819 } 00:39:10.819 ] 00:39:10.819 }' 00:39:10.819 16:16:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:10.819 16:16:15 -- common/autotest_common.sh@10 -- # set +x 00:39:11.419 16:16:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:39:11.419 16:16:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:11.419 16:16:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:11.419 16:16:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:39:11.419 16:16:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:39:11.419 16:16:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:11.419 16:16:15 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:39:11.677 [2024-07-22 16:16:15.931924] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:11.934 16:16:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:39:11.934 16:16:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:11.934 16:16:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:11.934 16:16:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:39:12.192 16:16:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:39:12.192 16:16:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:12.193 16:16:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:39:12.451 [2024-07-22 16:16:16.591657] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:12.451 16:16:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:39:12.451 16:16:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:12.451 16:16:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:39:12.451 16:16:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:13.017 16:16:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:39:13.017 16:16:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:13.017 16:16:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:39:13.017 [2024-07-22 16:16:17.241816] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:39:13.017 [2024-07-22 16:16:17.241866] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:13.017 [2024-07-22 16:16:17.241945] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:13.276 [2024-07-22 16:16:17.342855] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:13.276 [2024-07-22 16:16:17.342914] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:39:13.276 16:16:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:39:13.276 16:16:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:13.276 16:16:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:13.276 16:16:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:39:13.535 16:16:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:39:13.535 16:16:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:39:13.535 16:16:17 -- bdev/bdev_raid.sh@287 -- # killprocess 78001 00:39:13.535 16:16:17 -- common/autotest_common.sh@926 -- # '[' -z 78001 ']' 00:39:13.535 16:16:17 -- common/autotest_common.sh@930 -- # kill -0 78001 00:39:13.535 16:16:17 -- common/autotest_common.sh@931 -- # uname 00:39:13.535 16:16:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:13.535 16:16:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78001 00:39:13.535 killing process with pid 78001 00:39:13.535 16:16:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:39:13.535 16:16:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:39:13.535 16:16:17 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 78001' 00:39:13.535 16:16:17 -- common/autotest_common.sh@945 -- # kill 78001 00:39:13.535 16:16:17 -- common/autotest_common.sh@950 -- # wait 78001 00:39:13.535 [2024-07-22 16:16:17.699680] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:13.535 [2024-07-22 16:16:17.699808] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:14.931 16:16:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:39:14.931 00:39:14.931 real 0m14.150s 00:39:14.931 user 0m23.282s 00:39:14.931 sys 0m2.343s 00:39:14.931 16:16:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:14.931 ************************************ 00:39:14.931 END TEST raid_state_function_test 00:39:14.931 ************************************ 00:39:14.931 16:16:19 -- common/autotest_common.sh@10 -- # set +x 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:39:15.189 16:16:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:39:15.189 16:16:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:15.189 16:16:19 -- common/autotest_common.sh@10 -- # set +x 00:39:15.189 ************************************ 00:39:15.189 START TEST raid_state_function_test_sb 00:39:15.189 ************************************ 00:39:15.189 16:16:19 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:39:15.189 16:16:19 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=78413 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:39:15.189 Process raid pid: 78413 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 78413' 00:39:15.189 16:16:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 78413 /var/tmp/spdk-raid.sock 00:39:15.189 16:16:19 -- common/autotest_common.sh@819 -- # '[' -z 78413 ']' 00:39:15.189 16:16:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:15.189 16:16:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:15.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:15.189 16:16:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:15.189 16:16:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:15.189 16:16:19 -- common/autotest_common.sh@10 -- # set +x 00:39:15.189 [2024-07-22 16:16:19.323466] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:39:15.189 [2024-07-22 16:16:19.323659] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:15.448 [2024-07-22 16:16:19.506740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.706 [2024-07-22 16:16:19.818280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:15.964 [2024-07-22 16:16:20.058432] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:16.238 16:16:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:16.238 16:16:20 -- common/autotest_common.sh@852 -- # return 0 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:39:16.238 [2024-07-22 16:16:20.480620] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:16.238 [2024-07-22 16:16:20.480693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:16.238 [2024-07-22 16:16:20.480709] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:16.238 [2024-07-22 16:16:20.480725] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:16.238 [2024-07-22 16:16:20.480734] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:16.238 [2024-07-22 16:16:20.480750] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:16.238 [2024-07-22 16:16:20.480759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:16.238 [2024-07-22 16:16:20.480773] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
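For orientation, the xtrace above amounts to a short sequence of rpc.py calls against the test socket. A minimal manual sketch of the same setup follows, assuming a bdev_svc target is already listening on /var/tmp/spdk-raid.sock and using the same geometry the test uses (32 MB malloc bdevs, 512-byte blocks); the loop and call order are illustrative, not the test's exact sequence:

    # create the four base bdevs the raid1 set is built from
    for i in 1 2 3 4; do
        ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "BaseBdev$i"
    done
    # assemble them into a raid1 bdev; -s writes the on-disk superblock exercised by this _sb test
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # dump the raid bdev state that verify_raid_bdev_state inspects
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all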
00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:16.238 16:16:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:16.804 16:16:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:16.804 "name": "Existed_Raid", 00:39:16.804 "uuid": "a06fc236-103e-4c85-b040-d1a335721017", 00:39:16.804 "strip_size_kb": 0, 00:39:16.804 "state": "configuring", 00:39:16.804 "raid_level": "raid1", 00:39:16.804 "superblock": true, 00:39:16.804 "num_base_bdevs": 4, 00:39:16.804 "num_base_bdevs_discovered": 0, 00:39:16.805 "num_base_bdevs_operational": 4, 00:39:16.805 "base_bdevs_list": [ 00:39:16.805 { 00:39:16.805 "name": "BaseBdev1", 00:39:16.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:16.805 "is_configured": false, 00:39:16.805 "data_offset": 0, 00:39:16.805 "data_size": 0 00:39:16.805 }, 00:39:16.805 { 00:39:16.805 "name": "BaseBdev2", 00:39:16.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:16.805 "is_configured": false, 00:39:16.805 "data_offset": 0, 00:39:16.805 "data_size": 0 00:39:16.805 }, 00:39:16.805 { 00:39:16.805 "name": "BaseBdev3", 00:39:16.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:16.805 "is_configured": false, 00:39:16.805 "data_offset": 0, 00:39:16.805 "data_size": 0 00:39:16.805 }, 00:39:16.805 { 00:39:16.805 "name": "BaseBdev4", 00:39:16.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:16.805 "is_configured": false, 00:39:16.805 "data_offset": 0, 00:39:16.805 "data_size": 0 00:39:16.805 } 00:39:16.805 ] 00:39:16.805 }' 00:39:16.805 16:16:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:16.805 16:16:20 -- common/autotest_common.sh@10 -- # set +x 00:39:17.062 16:16:21 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:39:17.062 [2024-07-22 16:16:21.328934] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:17.062 [2024-07-22 16:16:21.329016] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:39:17.321 16:16:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:39:17.321 [2024-07-22 16:16:21.565033] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:17.321 [2024-07-22 16:16:21.565175] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:17.321 [2024-07-22 16:16:21.565214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:17.321 [2024-07-22 16:16:21.565230] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:17.321 [2024-07-22 16:16:21.565238] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:17.321 [2024-07-22 16:16:21.565252] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:17.321 [2024-07-22 16:16:21.565261] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:17.321 [2024-07-22 16:16:21.565275] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:17.321 16:16:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:39:17.578 [2024-07-22 16:16:21.840331] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:17.578 BaseBdev1 00:39:17.836 16:16:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:39:17.836 16:16:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:39:17.836 16:16:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:17.836 16:16:21 -- common/autotest_common.sh@889 -- # local i 00:39:17.836 16:16:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:17.836 16:16:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:17.836 16:16:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:17.836 16:16:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:18.094 [ 00:39:18.094 { 00:39:18.094 "name": "BaseBdev1", 00:39:18.094 "aliases": [ 00:39:18.094 "472b53e0-734c-485f-b8af-3bd5b21b28ea" 00:39:18.094 ], 00:39:18.094 "product_name": "Malloc disk", 00:39:18.094 "block_size": 512, 00:39:18.094 "num_blocks": 65536, 00:39:18.094 "uuid": "472b53e0-734c-485f-b8af-3bd5b21b28ea", 00:39:18.094 "assigned_rate_limits": { 00:39:18.094 "rw_ios_per_sec": 0, 00:39:18.094 "rw_mbytes_per_sec": 0, 00:39:18.094 "r_mbytes_per_sec": 0, 00:39:18.094 "w_mbytes_per_sec": 0 00:39:18.094 }, 00:39:18.094 "claimed": true, 00:39:18.094 "claim_type": "exclusive_write", 00:39:18.094 "zoned": false, 00:39:18.094 "supported_io_types": { 00:39:18.094 "read": true, 00:39:18.094 "write": true, 00:39:18.094 "unmap": true, 00:39:18.094 "write_zeroes": true, 00:39:18.094 "flush": true, 00:39:18.094 "reset": true, 00:39:18.094 "compare": false, 00:39:18.094 "compare_and_write": false, 00:39:18.094 "abort": true, 00:39:18.094 "nvme_admin": false, 00:39:18.094 "nvme_io": false 00:39:18.094 }, 00:39:18.094 "memory_domains": [ 00:39:18.094 { 00:39:18.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:18.094 "dma_device_type": 2 00:39:18.094 } 00:39:18.094 ], 00:39:18.094 "driver_specific": {} 00:39:18.094 } 00:39:18.094 ] 00:39:18.094 16:16:22 -- common/autotest_common.sh@895 -- # return 0 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:18.094 16:16:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:18.351 16:16:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:18.351 "name": "Existed_Raid", 00:39:18.351 "uuid": "8d1879ab-99cb-4d0e-b271-8d585326bdbe", 00:39:18.351 "strip_size_kb": 0, 00:39:18.351 "state": "configuring", 00:39:18.351 "raid_level": "raid1", 00:39:18.351 "superblock": true, 00:39:18.351 "num_base_bdevs": 4, 00:39:18.351 "num_base_bdevs_discovered": 1, 00:39:18.351 "num_base_bdevs_operational": 4, 00:39:18.351 "base_bdevs_list": [ 00:39:18.351 { 00:39:18.351 "name": "BaseBdev1", 00:39:18.351 "uuid": "472b53e0-734c-485f-b8af-3bd5b21b28ea", 00:39:18.351 "is_configured": true, 00:39:18.351 "data_offset": 2048, 00:39:18.351 "data_size": 63488 00:39:18.351 }, 00:39:18.351 { 00:39:18.351 "name": "BaseBdev2", 00:39:18.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.351 "is_configured": false, 00:39:18.351 "data_offset": 0, 00:39:18.351 "data_size": 0 00:39:18.351 }, 00:39:18.351 { 00:39:18.351 "name": "BaseBdev3", 00:39:18.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.351 "is_configured": false, 00:39:18.351 "data_offset": 0, 00:39:18.351 "data_size": 0 00:39:18.351 }, 00:39:18.351 { 00:39:18.351 "name": "BaseBdev4", 00:39:18.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:18.351 "is_configured": false, 00:39:18.351 "data_offset": 0, 00:39:18.351 "data_size": 0 00:39:18.351 } 00:39:18.351 ] 00:39:18.351 }' 00:39:18.351 16:16:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:18.351 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:39:18.917 16:16:22 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:39:18.917 [2024-07-22 16:16:23.116878] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:18.917 [2024-07-22 16:16:23.116953] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:39:18.917 16:16:23 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:39:18.917 16:16:23 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:39:19.483 16:16:23 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:39:19.746 BaseBdev1 00:39:19.746 16:16:23 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:39:19.746 16:16:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:39:19.746 16:16:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:19.746 16:16:23 -- common/autotest_common.sh@889 -- # local i 00:39:19.746 16:16:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:19.746 16:16:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:19.746 16:16:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:20.004 16:16:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:20.262 [ 00:39:20.262 { 00:39:20.262 "name": "BaseBdev1", 00:39:20.262 "aliases": [ 00:39:20.262 "8b694a57-20fe-49c8-b83b-05688fc4c1e9" 00:39:20.262 
], 00:39:20.262 "product_name": "Malloc disk", 00:39:20.262 "block_size": 512, 00:39:20.262 "num_blocks": 65536, 00:39:20.262 "uuid": "8b694a57-20fe-49c8-b83b-05688fc4c1e9", 00:39:20.262 "assigned_rate_limits": { 00:39:20.262 "rw_ios_per_sec": 0, 00:39:20.262 "rw_mbytes_per_sec": 0, 00:39:20.262 "r_mbytes_per_sec": 0, 00:39:20.262 "w_mbytes_per_sec": 0 00:39:20.262 }, 00:39:20.262 "claimed": false, 00:39:20.262 "zoned": false, 00:39:20.262 "supported_io_types": { 00:39:20.262 "read": true, 00:39:20.262 "write": true, 00:39:20.262 "unmap": true, 00:39:20.262 "write_zeroes": true, 00:39:20.262 "flush": true, 00:39:20.262 "reset": true, 00:39:20.262 "compare": false, 00:39:20.262 "compare_and_write": false, 00:39:20.262 "abort": true, 00:39:20.262 "nvme_admin": false, 00:39:20.262 "nvme_io": false 00:39:20.262 }, 00:39:20.262 "memory_domains": [ 00:39:20.262 { 00:39:20.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:20.262 "dma_device_type": 2 00:39:20.262 } 00:39:20.262 ], 00:39:20.262 "driver_specific": {} 00:39:20.262 } 00:39:20.262 ] 00:39:20.262 16:16:24 -- common/autotest_common.sh@895 -- # return 0 00:39:20.262 16:16:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:39:20.263 [2024-07-22 16:16:24.499927] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:20.263 [2024-07-22 16:16:24.502853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:20.263 [2024-07-22 16:16:24.502979] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:20.263 [2024-07-22 16:16:24.503013] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:39:20.263 [2024-07-22 16:16:24.503032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:39:20.263 [2024-07-22 16:16:24.503042] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:39:20.263 [2024-07-22 16:16:24.503062] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:20.521 16:16:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:20.780 16:16:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:20.780 "name": "Existed_Raid", 
00:39:20.780 "uuid": "9624ebc2-4526-416e-bbb0-c50876619c57", 00:39:20.780 "strip_size_kb": 0, 00:39:20.780 "state": "configuring", 00:39:20.780 "raid_level": "raid1", 00:39:20.780 "superblock": true, 00:39:20.780 "num_base_bdevs": 4, 00:39:20.780 "num_base_bdevs_discovered": 1, 00:39:20.780 "num_base_bdevs_operational": 4, 00:39:20.780 "base_bdevs_list": [ 00:39:20.780 { 00:39:20.780 "name": "BaseBdev1", 00:39:20.780 "uuid": "8b694a57-20fe-49c8-b83b-05688fc4c1e9", 00:39:20.780 "is_configured": true, 00:39:20.780 "data_offset": 2048, 00:39:20.780 "data_size": 63488 00:39:20.780 }, 00:39:20.780 { 00:39:20.780 "name": "BaseBdev2", 00:39:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:20.780 "is_configured": false, 00:39:20.780 "data_offset": 0, 00:39:20.780 "data_size": 0 00:39:20.780 }, 00:39:20.780 { 00:39:20.780 "name": "BaseBdev3", 00:39:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:20.780 "is_configured": false, 00:39:20.780 "data_offset": 0, 00:39:20.780 "data_size": 0 00:39:20.780 }, 00:39:20.780 { 00:39:20.780 "name": "BaseBdev4", 00:39:20.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:20.780 "is_configured": false, 00:39:20.780 "data_offset": 0, 00:39:20.780 "data_size": 0 00:39:20.780 } 00:39:20.780 ] 00:39:20.780 }' 00:39:20.780 16:16:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:20.780 16:16:24 -- common/autotest_common.sh@10 -- # set +x 00:39:21.039 16:16:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:39:21.297 [2024-07-22 16:16:25.503171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:21.297 BaseBdev2 00:39:21.297 16:16:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:39:21.297 16:16:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:39:21.297 16:16:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:21.297 16:16:25 -- common/autotest_common.sh@889 -- # local i 00:39:21.297 16:16:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:21.297 16:16:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:21.297 16:16:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:21.554 16:16:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:21.812 [ 00:39:21.812 { 00:39:21.812 "name": "BaseBdev2", 00:39:21.812 "aliases": [ 00:39:21.812 "c47cdbcd-8b86-4db5-b1d3-9c8018930b99" 00:39:21.812 ], 00:39:21.812 "product_name": "Malloc disk", 00:39:21.812 "block_size": 512, 00:39:21.812 "num_blocks": 65536, 00:39:21.812 "uuid": "c47cdbcd-8b86-4db5-b1d3-9c8018930b99", 00:39:21.812 "assigned_rate_limits": { 00:39:21.812 "rw_ios_per_sec": 0, 00:39:21.812 "rw_mbytes_per_sec": 0, 00:39:21.812 "r_mbytes_per_sec": 0, 00:39:21.812 "w_mbytes_per_sec": 0 00:39:21.812 }, 00:39:21.812 "claimed": true, 00:39:21.812 "claim_type": "exclusive_write", 00:39:21.812 "zoned": false, 00:39:21.812 "supported_io_types": { 00:39:21.812 "read": true, 00:39:21.812 "write": true, 00:39:21.812 "unmap": true, 00:39:21.812 "write_zeroes": true, 00:39:21.812 "flush": true, 00:39:21.812 "reset": true, 00:39:21.812 "compare": false, 00:39:21.812 "compare_and_write": false, 00:39:21.812 "abort": true, 00:39:21.812 "nvme_admin": false, 00:39:21.812 "nvme_io": false 00:39:21.812 }, 00:39:21.812 
"memory_domains": [ 00:39:21.812 { 00:39:21.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:21.812 "dma_device_type": 2 00:39:21.812 } 00:39:21.812 ], 00:39:21.812 "driver_specific": {} 00:39:21.812 } 00:39:21.812 ] 00:39:21.812 16:16:26 -- common/autotest_common.sh@895 -- # return 0 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:21.812 16:16:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:21.813 16:16:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:21.813 16:16:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:21.813 16:16:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:22.070 16:16:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:22.071 "name": "Existed_Raid", 00:39:22.071 "uuid": "9624ebc2-4526-416e-bbb0-c50876619c57", 00:39:22.071 "strip_size_kb": 0, 00:39:22.071 "state": "configuring", 00:39:22.071 "raid_level": "raid1", 00:39:22.071 "superblock": true, 00:39:22.071 "num_base_bdevs": 4, 00:39:22.071 "num_base_bdevs_discovered": 2, 00:39:22.071 "num_base_bdevs_operational": 4, 00:39:22.071 "base_bdevs_list": [ 00:39:22.071 { 00:39:22.071 "name": "BaseBdev1", 00:39:22.071 "uuid": "8b694a57-20fe-49c8-b83b-05688fc4c1e9", 00:39:22.071 "is_configured": true, 00:39:22.071 "data_offset": 2048, 00:39:22.071 "data_size": 63488 00:39:22.071 }, 00:39:22.071 { 00:39:22.071 "name": "BaseBdev2", 00:39:22.071 "uuid": "c47cdbcd-8b86-4db5-b1d3-9c8018930b99", 00:39:22.071 "is_configured": true, 00:39:22.071 "data_offset": 2048, 00:39:22.071 "data_size": 63488 00:39:22.071 }, 00:39:22.071 { 00:39:22.071 "name": "BaseBdev3", 00:39:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.071 "is_configured": false, 00:39:22.071 "data_offset": 0, 00:39:22.071 "data_size": 0 00:39:22.071 }, 00:39:22.071 { 00:39:22.071 "name": "BaseBdev4", 00:39:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.071 "is_configured": false, 00:39:22.071 "data_offset": 0, 00:39:22.071 "data_size": 0 00:39:22.071 } 00:39:22.071 ] 00:39:22.071 }' 00:39:22.071 16:16:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:22.071 16:16:26 -- common/autotest_common.sh@10 -- # set +x 00:39:22.635 16:16:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:39:22.892 [2024-07-22 16:16:26.946652] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:39:22.892 BaseBdev3 00:39:22.892 16:16:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:39:22.892 16:16:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:39:22.892 16:16:26 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:39:22.892 16:16:26 -- common/autotest_common.sh@889 -- # local i 00:39:22.892 16:16:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:22.892 16:16:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:22.892 16:16:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:23.160 16:16:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:39:23.417 [ 00:39:23.417 { 00:39:23.417 "name": "BaseBdev3", 00:39:23.417 "aliases": [ 00:39:23.417 "4bf3ce07-89eb-45e5-ab6c-2b5014f1ce9f" 00:39:23.417 ], 00:39:23.417 "product_name": "Malloc disk", 00:39:23.417 "block_size": 512, 00:39:23.417 "num_blocks": 65536, 00:39:23.417 "uuid": "4bf3ce07-89eb-45e5-ab6c-2b5014f1ce9f", 00:39:23.417 "assigned_rate_limits": { 00:39:23.417 "rw_ios_per_sec": 0, 00:39:23.417 "rw_mbytes_per_sec": 0, 00:39:23.417 "r_mbytes_per_sec": 0, 00:39:23.417 "w_mbytes_per_sec": 0 00:39:23.417 }, 00:39:23.417 "claimed": true, 00:39:23.417 "claim_type": "exclusive_write", 00:39:23.417 "zoned": false, 00:39:23.417 "supported_io_types": { 00:39:23.417 "read": true, 00:39:23.417 "write": true, 00:39:23.417 "unmap": true, 00:39:23.417 "write_zeroes": true, 00:39:23.417 "flush": true, 00:39:23.417 "reset": true, 00:39:23.417 "compare": false, 00:39:23.417 "compare_and_write": false, 00:39:23.417 "abort": true, 00:39:23.417 "nvme_admin": false, 00:39:23.417 "nvme_io": false 00:39:23.417 }, 00:39:23.417 "memory_domains": [ 00:39:23.417 { 00:39:23.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:23.417 "dma_device_type": 2 00:39:23.417 } 00:39:23.417 ], 00:39:23.417 "driver_specific": {} 00:39:23.417 } 00:39:23.417 ] 00:39:23.417 16:16:27 -- common/autotest_common.sh@895 -- # return 0 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:23.417 16:16:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:23.674 16:16:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:23.674 "name": "Existed_Raid", 00:39:23.674 "uuid": "9624ebc2-4526-416e-bbb0-c50876619c57", 00:39:23.674 "strip_size_kb": 0, 00:39:23.674 "state": "configuring", 00:39:23.674 "raid_level": "raid1", 00:39:23.674 "superblock": true, 00:39:23.674 "num_base_bdevs": 4, 00:39:23.674 "num_base_bdevs_discovered": 3, 00:39:23.674 "num_base_bdevs_operational": 4, 00:39:23.674 "base_bdevs_list": [ 00:39:23.674 { 
00:39:23.674 "name": "BaseBdev1", 00:39:23.674 "uuid": "8b694a57-20fe-49c8-b83b-05688fc4c1e9", 00:39:23.674 "is_configured": true, 00:39:23.674 "data_offset": 2048, 00:39:23.674 "data_size": 63488 00:39:23.674 }, 00:39:23.674 { 00:39:23.674 "name": "BaseBdev2", 00:39:23.674 "uuid": "c47cdbcd-8b86-4db5-b1d3-9c8018930b99", 00:39:23.674 "is_configured": true, 00:39:23.674 "data_offset": 2048, 00:39:23.674 "data_size": 63488 00:39:23.674 }, 00:39:23.674 { 00:39:23.674 "name": "BaseBdev3", 00:39:23.674 "uuid": "4bf3ce07-89eb-45e5-ab6c-2b5014f1ce9f", 00:39:23.674 "is_configured": true, 00:39:23.674 "data_offset": 2048, 00:39:23.674 "data_size": 63488 00:39:23.674 }, 00:39:23.674 { 00:39:23.674 "name": "BaseBdev4", 00:39:23.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:23.674 "is_configured": false, 00:39:23.674 "data_offset": 0, 00:39:23.674 "data_size": 0 00:39:23.674 } 00:39:23.674 ] 00:39:23.674 }' 00:39:23.674 16:16:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:23.674 16:16:27 -- common/autotest_common.sh@10 -- # set +x 00:39:23.931 16:16:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:39:24.189 [2024-07-22 16:16:28.287614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:39:24.189 [2024-07-22 16:16:28.288251] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:39:24.189 [2024-07-22 16:16:28.288330] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:24.189 [2024-07-22 16:16:28.288591] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:39:24.189 [2024-07-22 16:16:28.289252] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:39:24.189 [2024-07-22 16:16:28.289310] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:39:24.189 BaseBdev4 00:39:24.189 [2024-07-22 16:16:28.289628] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:24.189 16:16:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:39:24.189 16:16:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:39:24.189 16:16:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:39:24.189 16:16:28 -- common/autotest_common.sh@889 -- # local i 00:39:24.189 16:16:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:39:24.189 16:16:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:39:24.189 16:16:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:39:24.447 16:16:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:39:24.705 [ 00:39:24.705 { 00:39:24.705 "name": "BaseBdev4", 00:39:24.705 "aliases": [ 00:39:24.705 "ffff9060-c39b-4a6e-815b-2631135cf52e" 00:39:24.705 ], 00:39:24.705 "product_name": "Malloc disk", 00:39:24.705 "block_size": 512, 00:39:24.705 "num_blocks": 65536, 00:39:24.705 "uuid": "ffff9060-c39b-4a6e-815b-2631135cf52e", 00:39:24.705 "assigned_rate_limits": { 00:39:24.705 "rw_ios_per_sec": 0, 00:39:24.705 "rw_mbytes_per_sec": 0, 00:39:24.705 "r_mbytes_per_sec": 0, 00:39:24.705 "w_mbytes_per_sec": 0 00:39:24.705 }, 00:39:24.705 "claimed": true, 00:39:24.705 "claim_type": "exclusive_write", 00:39:24.705 "zoned": false, 
00:39:24.705 "supported_io_types": { 00:39:24.705 "read": true, 00:39:24.705 "write": true, 00:39:24.705 "unmap": true, 00:39:24.705 "write_zeroes": true, 00:39:24.705 "flush": true, 00:39:24.705 "reset": true, 00:39:24.705 "compare": false, 00:39:24.705 "compare_and_write": false, 00:39:24.705 "abort": true, 00:39:24.705 "nvme_admin": false, 00:39:24.705 "nvme_io": false 00:39:24.705 }, 00:39:24.705 "memory_domains": [ 00:39:24.705 { 00:39:24.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:24.705 "dma_device_type": 2 00:39:24.705 } 00:39:24.705 ], 00:39:24.705 "driver_specific": {} 00:39:24.705 } 00:39:24.705 ] 00:39:24.705 16:16:28 -- common/autotest_common.sh@895 -- # return 0 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:24.705 16:16:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:24.963 16:16:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:24.963 "name": "Existed_Raid", 00:39:24.963 "uuid": "9624ebc2-4526-416e-bbb0-c50876619c57", 00:39:24.963 "strip_size_kb": 0, 00:39:24.963 "state": "online", 00:39:24.963 "raid_level": "raid1", 00:39:24.963 "superblock": true, 00:39:24.963 "num_base_bdevs": 4, 00:39:24.963 "num_base_bdevs_discovered": 4, 00:39:24.963 "num_base_bdevs_operational": 4, 00:39:24.963 "base_bdevs_list": [ 00:39:24.963 { 00:39:24.963 "name": "BaseBdev1", 00:39:24.963 "uuid": "8b694a57-20fe-49c8-b83b-05688fc4c1e9", 00:39:24.963 "is_configured": true, 00:39:24.963 "data_offset": 2048, 00:39:24.963 "data_size": 63488 00:39:24.963 }, 00:39:24.963 { 00:39:24.963 "name": "BaseBdev2", 00:39:24.963 "uuid": "c47cdbcd-8b86-4db5-b1d3-9c8018930b99", 00:39:24.963 "is_configured": true, 00:39:24.963 "data_offset": 2048, 00:39:24.963 "data_size": 63488 00:39:24.963 }, 00:39:24.963 { 00:39:24.963 "name": "BaseBdev3", 00:39:24.963 "uuid": "4bf3ce07-89eb-45e5-ab6c-2b5014f1ce9f", 00:39:24.963 "is_configured": true, 00:39:24.963 "data_offset": 2048, 00:39:24.963 "data_size": 63488 00:39:24.963 }, 00:39:24.963 { 00:39:24.963 "name": "BaseBdev4", 00:39:24.963 "uuid": "ffff9060-c39b-4a6e-815b-2631135cf52e", 00:39:24.963 "is_configured": true, 00:39:24.963 "data_offset": 2048, 00:39:24.963 "data_size": 63488 00:39:24.963 } 00:39:24.963 ] 00:39:24.963 }' 00:39:24.963 16:16:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:24.963 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:39:25.221 16:16:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:39:25.221 [2024-07-22 16:16:29.484116] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@196 -- # return 0 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:25.479 16:16:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:25.738 16:16:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:25.738 "name": "Existed_Raid", 00:39:25.738 "uuid": "9624ebc2-4526-416e-bbb0-c50876619c57", 00:39:25.738 "strip_size_kb": 0, 00:39:25.738 "state": "online", 00:39:25.738 "raid_level": "raid1", 00:39:25.738 "superblock": true, 00:39:25.738 "num_base_bdevs": 4, 00:39:25.738 "num_base_bdevs_discovered": 3, 00:39:25.738 "num_base_bdevs_operational": 3, 00:39:25.738 "base_bdevs_list": [ 00:39:25.738 { 00:39:25.738 "name": null, 00:39:25.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.738 "is_configured": false, 00:39:25.738 "data_offset": 2048, 00:39:25.738 "data_size": 63488 00:39:25.738 }, 00:39:25.738 { 00:39:25.738 "name": "BaseBdev2", 00:39:25.738 "uuid": "c47cdbcd-8b86-4db5-b1d3-9c8018930b99", 00:39:25.738 "is_configured": true, 00:39:25.738 "data_offset": 2048, 00:39:25.738 "data_size": 63488 00:39:25.738 }, 00:39:25.738 { 00:39:25.738 "name": "BaseBdev3", 00:39:25.738 "uuid": "4bf3ce07-89eb-45e5-ab6c-2b5014f1ce9f", 00:39:25.738 "is_configured": true, 00:39:25.738 "data_offset": 2048, 00:39:25.738 "data_size": 63488 00:39:25.738 }, 00:39:25.738 { 00:39:25.738 "name": "BaseBdev4", 00:39:25.738 "uuid": "ffff9060-c39b-4a6e-815b-2631135cf52e", 00:39:25.738 "is_configured": true, 00:39:25.738 "data_offset": 2048, 00:39:25.738 "data_size": 63488 00:39:25.738 } 00:39:25.738 ] 00:39:25.738 }' 00:39:25.738 16:16:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:25.738 16:16:29 -- common/autotest_common.sh@10 -- # set +x 00:39:26.041 16:16:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:39:26.041 16:16:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:26.041 16:16:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:39:26.041 16:16:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.299 16:16:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:39:26.299 16:16:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:39:26.299 16:16:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:39:26.555 [2024-07-22 16:16:30.629594] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:26.555 16:16:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:39:26.555 16:16:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:26.555 16:16:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:26.555 16:16:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:39:26.814 16:16:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:39:26.814 16:16:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:26.814 16:16:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:39:27.072 [2024-07-22 16:16:31.219115] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:39:27.331 16:16:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:39:27.331 16:16:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:27.331 16:16:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:27.331 16:16:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:39:27.331 16:16:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:39:27.331 16:16:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:27.331 16:16:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:39:27.589 [2024-07-22 16:16:31.798646] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:39:27.589 [2024-07-22 16:16:31.798719] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:27.589 [2024-07-22 16:16:31.798809] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:27.847 [2024-07-22 16:16:31.910130] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:27.847 [2024-07-22 16:16:31.910198] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:39:27.847 16:16:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:39:27.847 16:16:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:39:27.847 16:16:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:27.847 16:16:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:39:28.105 16:16:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:39:28.105 16:16:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:39:28.105 16:16:32 -- bdev/bdev_raid.sh@287 -- # killprocess 78413 00:39:28.105 16:16:32 -- common/autotest_common.sh@926 -- # '[' -z 78413 ']' 00:39:28.105 16:16:32 -- common/autotest_common.sh@930 -- # kill -0 78413 00:39:28.105 16:16:32 -- common/autotest_common.sh@931 -- # uname 00:39:28.105 16:16:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:28.105 16:16:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78413 00:39:28.105 killing process with pid 78413 00:39:28.105 16:16:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:39:28.105 16:16:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = 
sudo ']' 00:39:28.105 16:16:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78413' 00:39:28.105 16:16:32 -- common/autotest_common.sh@945 -- # kill 78413 00:39:28.105 16:16:32 -- common/autotest_common.sh@950 -- # wait 78413 00:39:28.105 [2024-07-22 16:16:32.250760] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:28.105 [2024-07-22 16:16:32.250889] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:39:29.478 00:39:29.478 real 0m14.395s 00:39:29.478 user 0m23.652s 00:39:29.478 sys 0m2.384s 00:39:29.478 16:16:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:29.478 ************************************ 00:39:29.478 END TEST raid_state_function_test_sb 00:39:29.478 ************************************ 00:39:29.478 16:16:33 -- common/autotest_common.sh@10 -- # set +x 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:39:29.478 16:16:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:39:29.478 16:16:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:29.478 16:16:33 -- common/autotest_common.sh@10 -- # set +x 00:39:29.478 ************************************ 00:39:29.478 START TEST raid_superblock_test 00:39:29.478 ************************************ 00:39:29.478 16:16:33 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@357 -- # raid_pid=78833 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@358 -- # waitforlisten 78833 /var/tmp/spdk-raid.sock 00:39:29.478 16:16:33 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:39:29.478 16:16:33 -- common/autotest_common.sh@819 -- # '[' -z 78833 ']' 00:39:29.478 16:16:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:29.478 16:16:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:39:29.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:29.479 16:16:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:39:29.479 16:16:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:29.479 16:16:33 -- common/autotest_common.sh@10 -- # set +x 00:39:29.738 [2024-07-22 16:16:33.766821] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:39:29.738 [2024-07-22 16:16:33.767013] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78833 ] 00:39:29.738 [2024-07-22 16:16:33.943637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.994 [2024-07-22 16:16:34.244726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:30.251 [2024-07-22 16:16:34.486678] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:30.509 16:16:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:30.509 16:16:34 -- common/autotest_common.sh@852 -- # return 0 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:30.509 16:16:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:39:30.767 malloc1 00:39:30.767 16:16:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:31.026 [2024-07-22 16:16:35.252471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:31.026 [2024-07-22 16:16:35.252589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:31.026 [2024-07-22 16:16:35.252651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:39:31.026 [2024-07-22 16:16:35.252684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:31.026 [2024-07-22 16:16:35.256351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:31.026 [2024-07-22 16:16:35.256397] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:31.026 pt1 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:31.026 16:16:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:39:31.591 malloc2 00:39:31.591 16:16:35 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:31.591 [2024-07-22 16:16:35.816397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:31.591 [2024-07-22 16:16:35.816506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:31.591 [2024-07-22 16:16:35.816543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:39:31.592 [2024-07-22 16:16:35.816559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:31.592 [2024-07-22 16:16:35.819710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:31.592 [2024-07-22 16:16:35.819758] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:31.592 pt2 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:31.592 16:16:35 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:39:32.159 malloc3 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:32.159 [2024-07-22 16:16:36.363303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:32.159 [2024-07-22 16:16:36.363651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:32.159 [2024-07-22 16:16:36.363714] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:39:32.159 [2024-07-22 16:16:36.363732] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:32.159 [2024-07-22 16:16:36.366532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:32.159 [2024-07-22 16:16:36.366577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:32.159 pt3 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:32.159 16:16:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:39:32.416 malloc4 00:39:32.682 16:16:36 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:32.949 [2024-07-22 16:16:36.966736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:32.949 [2024-07-22 16:16:36.966836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:32.949 [2024-07-22 16:16:36.966882] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:39:32.949 [2024-07-22 16:16:36.966898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:32.949 [2024-07-22 16:16:36.970010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:32.949 [2024-07-22 16:16:36.970080] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:32.949 pt4 00:39:32.949 16:16:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:39:32.949 16:16:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:39:32.949 16:16:36 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:39:33.207 [2024-07-22 16:16:37.239040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:33.207 [2024-07-22 16:16:37.241493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:33.207 [2024-07-22 16:16:37.241591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:33.207 [2024-07-22 16:16:37.241850] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:33.207 [2024-07-22 16:16:37.242141] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:39:33.207 [2024-07-22 16:16:37.242160] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:33.207 [2024-07-22 16:16:37.242307] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:39:33.207 [2024-07-22 16:16:37.242764] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:39:33.207 [2024-07-22 16:16:37.242787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:39:33.207 [2024-07-22 16:16:37.243034] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:39:33.207 16:16:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:33.465 16:16:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:33.465 "name": "raid_bdev1", 00:39:33.465 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:33.465 "strip_size_kb": 0, 00:39:33.465 "state": "online", 00:39:33.465 "raid_level": "raid1", 00:39:33.465 "superblock": true, 00:39:33.465 "num_base_bdevs": 4, 00:39:33.465 "num_base_bdevs_discovered": 4, 00:39:33.465 "num_base_bdevs_operational": 4, 00:39:33.465 "base_bdevs_list": [ 00:39:33.465 { 00:39:33.465 "name": "pt1", 00:39:33.465 "uuid": "a304285f-27df-5ce5-ae10-601112f58e1b", 00:39:33.465 "is_configured": true, 00:39:33.465 "data_offset": 2048, 00:39:33.465 "data_size": 63488 00:39:33.465 }, 00:39:33.465 { 00:39:33.465 "name": "pt2", 00:39:33.465 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:33.465 "is_configured": true, 00:39:33.465 "data_offset": 2048, 00:39:33.465 "data_size": 63488 00:39:33.465 }, 00:39:33.465 { 00:39:33.465 "name": "pt3", 00:39:33.465 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:33.465 "is_configured": true, 00:39:33.465 "data_offset": 2048, 00:39:33.465 "data_size": 63488 00:39:33.466 }, 00:39:33.466 { 00:39:33.466 "name": "pt4", 00:39:33.466 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:33.466 "is_configured": true, 00:39:33.466 "data_offset": 2048, 00:39:33.466 "data_size": 63488 00:39:33.466 } 00:39:33.466 ] 00:39:33.466 }' 00:39:33.466 16:16:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:33.466 16:16:37 -- common/autotest_common.sh@10 -- # set +x 00:39:33.724 16:16:37 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:33.724 16:16:37 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:39:33.983 [2024-07-22 16:16:38.127739] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:33.983 16:16:38 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=125fd1e7-7031-4d8f-b9f8-b182ed927ad0 00:39:33.983 16:16:38 -- bdev/bdev_raid.sh@380 -- # '[' -z 125fd1e7-7031-4d8f-b9f8-b182ed927ad0 ']' 00:39:33.983 16:16:38 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:34.241 [2024-07-22 16:16:38.359403] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:34.241 [2024-07-22 16:16:38.359467] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:34.241 [2024-07-22 16:16:38.359564] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:34.241 [2024-07-22 16:16:38.359688] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:34.241 [2024-07-22 16:16:38.359708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:39:34.241 16:16:38 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:39:34.241 16:16:38 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:34.500 16:16:38 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:39:34.500 16:16:38 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:39:34.500 16:16:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:39:34.500 16:16:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:39:34.758 16:16:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:39:34.758 16:16:38 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:35.017 16:16:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:39:35.017 16:16:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:39:35.275 16:16:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:39:35.275 16:16:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:39:35.534 16:16:39 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:39:35.534 16:16:39 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:35.792 16:16:39 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:39:35.792 16:16:39 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:39:35.792 16:16:39 -- common/autotest_common.sh@640 -- # local es=0 00:39:35.792 16:16:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:39:35.792 16:16:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.792 16:16:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:35.792 16:16:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.792 16:16:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:35.792 16:16:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.792 16:16:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:39:35.792 16:16:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.792 16:16:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:35.792 16:16:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:39:36.051 [2024-07-22 16:16:40.199906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:36.051 [2024-07-22 16:16:40.202167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:36.051 [2024-07-22 16:16:40.202236] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:39:36.051 [2024-07-22 16:16:40.202287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:39:36.051 [2024-07-22 16:16:40.202354] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:39:36.051 [2024-07-22 16:16:40.202423] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:39:36.051 [2024-07-22 16:16:40.202456] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:39:36.051 [2024-07-22 16:16:40.202482] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:39:36.051 [2024-07-22 16:16:40.202505] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:36.051 [2024-07-22 16:16:40.202518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:39:36.051 request: 00:39:36.051 { 00:39:36.051 "name": "raid_bdev1", 00:39:36.051 "raid_level": "raid1", 00:39:36.051 "base_bdevs": [ 00:39:36.051 "malloc1", 00:39:36.051 "malloc2", 00:39:36.051 "malloc3", 00:39:36.051 "malloc4" 00:39:36.051 ], 00:39:36.051 "superblock": false, 00:39:36.051 "method": "bdev_raid_create", 00:39:36.051 "req_id": 1 00:39:36.051 } 00:39:36.051 Got JSON-RPC error response 00:39:36.051 response: 00:39:36.051 { 00:39:36.051 "code": -17, 00:39:36.051 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:36.051 } 00:39:36.051 16:16:40 -- common/autotest_common.sh@643 -- # es=1 00:39:36.051 16:16:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:39:36.051 16:16:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:39:36.051 16:16:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:39:36.051 16:16:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:36.051 16:16:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:39:36.309 16:16:40 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:39:36.309 16:16:40 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:39:36.309 16:16:40 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:36.568 [2024-07-22 16:16:40.691975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:36.568 [2024-07-22 16:16:40.692096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:36.568 [2024-07-22 16:16:40.692133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:39:36.568 [2024-07-22 16:16:40.692147] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:36.568 [2024-07-22 16:16:40.694713] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:36.568 [2024-07-22 16:16:40.694756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:36.568 [2024-07-22 16:16:40.694880] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:39:36.568 [2024-07-22 16:16:40.694939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:36.568 pt1 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:36.568 16:16:40 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:36.568 16:16:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:36.826 16:16:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:36.826 "name": "raid_bdev1", 00:39:36.826 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:36.826 "strip_size_kb": 0, 00:39:36.826 "state": "configuring", 00:39:36.826 "raid_level": "raid1", 00:39:36.826 "superblock": true, 00:39:36.826 "num_base_bdevs": 4, 00:39:36.826 "num_base_bdevs_discovered": 1, 00:39:36.826 "num_base_bdevs_operational": 4, 00:39:36.826 "base_bdevs_list": [ 00:39:36.826 { 00:39:36.826 "name": "pt1", 00:39:36.826 "uuid": "a304285f-27df-5ce5-ae10-601112f58e1b", 00:39:36.826 "is_configured": true, 00:39:36.826 "data_offset": 2048, 00:39:36.826 "data_size": 63488 00:39:36.826 }, 00:39:36.826 { 00:39:36.826 "name": null, 00:39:36.826 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:36.826 "is_configured": false, 00:39:36.826 "data_offset": 2048, 00:39:36.826 "data_size": 63488 00:39:36.826 }, 00:39:36.826 { 00:39:36.826 "name": null, 00:39:36.826 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:36.826 "is_configured": false, 00:39:36.826 "data_offset": 2048, 00:39:36.826 "data_size": 63488 00:39:36.826 }, 00:39:36.826 { 00:39:36.826 "name": null, 00:39:36.826 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:36.826 "is_configured": false, 00:39:36.826 "data_offset": 2048, 00:39:36.826 "data_size": 63488 00:39:36.826 } 00:39:36.826 ] 00:39:36.826 }' 00:39:36.826 16:16:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:36.826 16:16:40 -- common/autotest_common.sh@10 -- # set +x 00:39:37.084 16:16:41 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:39:37.084 16:16:41 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:37.341 [2024-07-22 16:16:41.504235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:37.341 [2024-07-22 16:16:41.504337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:37.341 [2024-07-22 16:16:41.504392] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:39:37.341 [2024-07-22 16:16:41.504406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:37.341 [2024-07-22 16:16:41.504943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:37.341 [2024-07-22 16:16:41.504975] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:37.341 [2024-07-22 16:16:41.505121] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:39:37.341 [2024-07-22 16:16:41.505153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:37.341 pt2 00:39:37.341 16:16:41 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:37.601 [2024-07-22 16:16:41.700285] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:37.601 16:16:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:37.860 16:16:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:37.860 "name": "raid_bdev1", 00:39:37.860 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:37.860 "strip_size_kb": 0, 00:39:37.860 "state": "configuring", 00:39:37.860 "raid_level": "raid1", 00:39:37.860 "superblock": true, 00:39:37.860 "num_base_bdevs": 4, 00:39:37.860 "num_base_bdevs_discovered": 1, 00:39:37.860 "num_base_bdevs_operational": 4, 00:39:37.860 "base_bdevs_list": [ 00:39:37.860 { 00:39:37.860 "name": "pt1", 00:39:37.860 "uuid": "a304285f-27df-5ce5-ae10-601112f58e1b", 00:39:37.860 "is_configured": true, 00:39:37.860 "data_offset": 2048, 00:39:37.860 "data_size": 63488 00:39:37.860 }, 00:39:37.860 { 00:39:37.860 "name": null, 00:39:37.860 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:37.860 "is_configured": false, 00:39:37.860 "data_offset": 2048, 00:39:37.860 "data_size": 63488 00:39:37.860 }, 00:39:37.860 { 00:39:37.860 "name": null, 00:39:37.860 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:37.860 "is_configured": false, 00:39:37.860 "data_offset": 2048, 00:39:37.860 "data_size": 63488 00:39:37.860 }, 00:39:37.860 { 00:39:37.860 "name": null, 00:39:37.860 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:37.860 "is_configured": false, 00:39:37.860 "data_offset": 2048, 00:39:37.860 "data_size": 63488 00:39:37.860 } 00:39:37.860 ] 00:39:37.860 }' 00:39:37.860 16:16:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:37.860 16:16:41 -- common/autotest_common.sh@10 -- # set +x 00:39:38.118 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:39:38.118 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:39:38.118 16:16:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:38.377 [2024-07-22 16:16:42.420436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:38.377 [2024-07-22 16:16:42.420536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:38.377 [2024-07-22 16:16:42.420582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:39:38.377 [2024-07-22 16:16:42.420601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:38.377 [2024-07-22 16:16:42.421190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:38.377 [2024-07-22 16:16:42.421222] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:38.377 [2024-07-22 16:16:42.421336] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:39:38.377 [2024-07-22 
16:16:42.421372] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:38.377 pt2 00:39:38.377 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:39:38.377 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:39:38.377 16:16:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:38.635 [2024-07-22 16:16:42.680532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:38.635 [2024-07-22 16:16:42.680631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:38.635 [2024-07-22 16:16:42.680661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:39:38.635 [2024-07-22 16:16:42.680679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:38.635 [2024-07-22 16:16:42.681271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:38.635 [2024-07-22 16:16:42.681300] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:38.635 [2024-07-22 16:16:42.681427] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:39:38.635 [2024-07-22 16:16:42.681459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:38.635 pt3 00:39:38.635 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:39:38.635 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:39:38.635 16:16:42 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:38.635 [2024-07-22 16:16:42.904649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:38.635 [2024-07-22 16:16:42.905071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:38.635 [2024-07-22 16:16:42.905114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:39:38.635 [2024-07-22 16:16:42.905132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:38.635 [2024-07-22 16:16:42.905684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:38.635 [2024-07-22 16:16:42.905712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:38.635 [2024-07-22 16:16:42.905813] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:39:38.635 [2024-07-22 16:16:42.905856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:38.635 [2024-07-22 16:16:42.906053] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:39:38.635 [2024-07-22 16:16:42.906071] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:38.635 [2024-07-22 16:16:42.906178] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:39:38.635 [2024-07-22 16:16:42.906515] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:39:38.635 [2024-07-22 16:16:42.906531] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:39:38.635 [2024-07-22 16:16:42.906703] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:38.892 pt4 
00:39:38.892 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:38.893 16:16:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:38.893 16:16:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:38.893 "name": "raid_bdev1", 00:39:38.893 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:38.893 "strip_size_kb": 0, 00:39:38.893 "state": "online", 00:39:38.893 "raid_level": "raid1", 00:39:38.893 "superblock": true, 00:39:38.893 "num_base_bdevs": 4, 00:39:38.893 "num_base_bdevs_discovered": 4, 00:39:38.893 "num_base_bdevs_operational": 4, 00:39:38.893 "base_bdevs_list": [ 00:39:38.893 { 00:39:38.893 "name": "pt1", 00:39:38.893 "uuid": "a304285f-27df-5ce5-ae10-601112f58e1b", 00:39:38.893 "is_configured": true, 00:39:38.893 "data_offset": 2048, 00:39:38.893 "data_size": 63488 00:39:38.893 }, 00:39:38.893 { 00:39:38.893 "name": "pt2", 00:39:38.893 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:38.893 "is_configured": true, 00:39:38.893 "data_offset": 2048, 00:39:38.893 "data_size": 63488 00:39:38.893 }, 00:39:38.893 { 00:39:38.893 "name": "pt3", 00:39:38.893 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:38.893 "is_configured": true, 00:39:38.893 "data_offset": 2048, 00:39:38.893 "data_size": 63488 00:39:38.893 }, 00:39:38.893 { 00:39:38.893 "name": "pt4", 00:39:38.893 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:38.893 "is_configured": true, 00:39:38.893 "data_offset": 2048, 00:39:38.893 "data_size": 63488 00:39:38.893 } 00:39:38.893 ] 00:39:38.893 }' 00:39:38.893 16:16:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:38.893 16:16:43 -- common/autotest_common.sh@10 -- # set +x 00:39:39.458 16:16:43 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:39:39.458 16:16:43 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:39.458 [2024-07-22 16:16:43.705180] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:39.458 16:16:43 -- bdev/bdev_raid.sh@430 -- # '[' 125fd1e7-7031-4d8f-b9f8-b182ed927ad0 '!=' 125fd1e7-7031-4d8f-b9f8-b182ed927ad0 ']' 00:39:39.458 16:16:43 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:39:39.458 16:16:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:39:39.458 16:16:43 -- bdev/bdev_raid.sh@196 -- # return 0 00:39:39.458 16:16:43 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:39:39.716 [2024-07-22 16:16:43.969018] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:39.985 16:16:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:39.985 16:16:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:39.985 "name": "raid_bdev1", 00:39:39.985 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:39.985 "strip_size_kb": 0, 00:39:39.985 "state": "online", 00:39:39.985 "raid_level": "raid1", 00:39:39.985 "superblock": true, 00:39:39.985 "num_base_bdevs": 4, 00:39:39.985 "num_base_bdevs_discovered": 3, 00:39:39.985 "num_base_bdevs_operational": 3, 00:39:39.985 "base_bdevs_list": [ 00:39:39.985 { 00:39:39.985 "name": null, 00:39:39.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:39.985 "is_configured": false, 00:39:39.985 "data_offset": 2048, 00:39:39.985 "data_size": 63488 00:39:39.985 }, 00:39:39.985 { 00:39:39.985 "name": "pt2", 00:39:39.985 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:39.985 "is_configured": true, 00:39:39.985 "data_offset": 2048, 00:39:39.985 "data_size": 63488 00:39:39.985 }, 00:39:39.985 { 00:39:39.985 "name": "pt3", 00:39:39.985 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:39.985 "is_configured": true, 00:39:39.985 "data_offset": 2048, 00:39:39.985 "data_size": 63488 00:39:39.985 }, 00:39:39.985 { 00:39:39.985 "name": "pt4", 00:39:39.985 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:39.985 "is_configured": true, 00:39:39.985 "data_offset": 2048, 00:39:39.985 "data_size": 63488 00:39:39.985 } 00:39:39.985 ] 00:39:39.985 }' 00:39:39.985 16:16:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:39.985 16:16:44 -- common/autotest_common.sh@10 -- # set +x 00:39:40.596 16:16:44 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:40.596 [2024-07-22 16:16:44.817338] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:40.596 [2024-07-22 16:16:44.817392] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:40.596 [2024-07-22 16:16:44.817484] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:40.596 [2024-07-22 16:16:44.817611] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:40.596 [2024-07-22 16:16:44.817629] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:39:40.596 16:16:44 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:39:40.596 16:16:44 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:39:40.854 16:16:45 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:39:40.854 16:16:45 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:39:40.854 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:39:40.854 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:39:40.854 16:16:45 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:41.112 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:39:41.112 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:39:41.112 16:16:45 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:39:41.370 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:39:41.370 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:39:41.370 16:16:45 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:39:41.935 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:39:41.935 16:16:45 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:39:41.935 16:16:45 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:39:41.935 16:16:45 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:39:41.935 16:16:45 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:41.935 [2024-07-22 16:16:46.125836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:41.935 [2024-07-22 16:16:46.126220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:41.935 [2024-07-22 16:16:46.126274] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:39:41.935 [2024-07-22 16:16:46.126291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:41.935 [2024-07-22 16:16:46.129059] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:41.935 [2024-07-22 16:16:46.129106] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:41.935 [2024-07-22 16:16:46.129221] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:39:41.935 [2024-07-22 16:16:46.129288] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:41.935 pt2 00:39:41.935 16:16:46 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:41.936 16:16:46 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:42.193 16:16:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:42.193 "name": "raid_bdev1", 00:39:42.193 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:42.193 "strip_size_kb": 0, 00:39:42.193 "state": "configuring", 00:39:42.193 "raid_level": "raid1", 00:39:42.193 "superblock": true, 00:39:42.193 "num_base_bdevs": 4, 00:39:42.193 "num_base_bdevs_discovered": 1, 00:39:42.193 "num_base_bdevs_operational": 3, 00:39:42.193 "base_bdevs_list": [ 00:39:42.193 { 00:39:42.193 "name": null, 00:39:42.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:42.193 "is_configured": false, 00:39:42.193 "data_offset": 2048, 00:39:42.193 "data_size": 63488 00:39:42.193 }, 00:39:42.193 { 00:39:42.193 "name": "pt2", 00:39:42.193 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:42.193 "is_configured": true, 00:39:42.193 "data_offset": 2048, 00:39:42.193 "data_size": 63488 00:39:42.193 }, 00:39:42.193 { 00:39:42.193 "name": null, 00:39:42.193 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:42.193 "is_configured": false, 00:39:42.193 "data_offset": 2048, 00:39:42.193 "data_size": 63488 00:39:42.193 }, 00:39:42.193 { 00:39:42.193 "name": null, 00:39:42.193 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:42.193 "is_configured": false, 00:39:42.193 "data_offset": 2048, 00:39:42.193 "data_size": 63488 00:39:42.193 } 00:39:42.193 ] 00:39:42.193 }' 00:39:42.193 16:16:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:42.193 16:16:46 -- common/autotest_common.sh@10 -- # set +x 00:39:42.780 16:16:46 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:39:42.780 16:16:46 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:39:42.780 16:16:46 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:42.780 [2024-07-22 16:16:46.977986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:42.780 [2024-07-22 16:16:46.978126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:42.780 [2024-07-22 16:16:46.978165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:39:42.780 [2024-07-22 16:16:46.978181] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:42.780 [2024-07-22 16:16:46.978715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:42.780 [2024-07-22 16:16:46.978752] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:42.780 [2024-07-22 16:16:46.978871] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:39:42.780 [2024-07-22 16:16:46.978902] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:42.780 pt3 00:39:42.780 16:16:46 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:39:42.780 16:16:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:42.780 16:16:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:43.038 16:16:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:43.038 "name": "raid_bdev1", 00:39:43.038 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:43.038 "strip_size_kb": 0, 00:39:43.038 "state": "configuring", 00:39:43.038 "raid_level": "raid1", 00:39:43.038 "superblock": true, 00:39:43.038 "num_base_bdevs": 4, 00:39:43.038 "num_base_bdevs_discovered": 2, 00:39:43.038 "num_base_bdevs_operational": 3, 00:39:43.038 "base_bdevs_list": [ 00:39:43.038 { 00:39:43.038 "name": null, 00:39:43.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:43.038 "is_configured": false, 00:39:43.038 "data_offset": 2048, 00:39:43.038 "data_size": 63488 00:39:43.038 }, 00:39:43.038 { 00:39:43.038 "name": "pt2", 00:39:43.038 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:43.038 "is_configured": true, 00:39:43.038 "data_offset": 2048, 00:39:43.038 "data_size": 63488 00:39:43.038 }, 00:39:43.038 { 00:39:43.038 "name": "pt3", 00:39:43.038 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:43.038 "is_configured": true, 00:39:43.038 "data_offset": 2048, 00:39:43.038 "data_size": 63488 00:39:43.038 }, 00:39:43.038 { 00:39:43.038 "name": null, 00:39:43.038 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:43.038 "is_configured": false, 00:39:43.038 "data_offset": 2048, 00:39:43.038 "data_size": 63488 00:39:43.038 } 00:39:43.038 ] 00:39:43.038 }' 00:39:43.038 16:16:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:43.038 16:16:47 -- common/autotest_common.sh@10 -- # set +x 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@462 -- # i=3 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:43.605 [2024-07-22 16:16:47.850229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:43.605 [2024-07-22 16:16:47.850346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:43.605 [2024-07-22 16:16:47.850390] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:39:43.605 [2024-07-22 16:16:47.850406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:43.605 [2024-07-22 16:16:47.850953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:43.605 [2024-07-22 16:16:47.850979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:43.605 [2024-07-22 16:16:47.851114] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:39:43.605 [2024-07-22 16:16:47.851174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:43.605 [2024-07-22 16:16:47.851342] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:39:43.605 [2024-07-22 16:16:47.851367] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:39:43.605 [2024-07-22 16:16:47.851476] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:39:43.605 [2024-07-22 16:16:47.851884] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:39:43.605 [2024-07-22 16:16:47.851916] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:39:43.605 [2024-07-22 16:16:47.852090] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:43.605 pt4 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:43.605 16:16:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:44.171 16:16:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:44.171 "name": "raid_bdev1", 00:39:44.171 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:44.171 "strip_size_kb": 0, 00:39:44.171 "state": "online", 00:39:44.171 "raid_level": "raid1", 00:39:44.171 "superblock": true, 00:39:44.171 "num_base_bdevs": 4, 00:39:44.171 "num_base_bdevs_discovered": 3, 00:39:44.171 "num_base_bdevs_operational": 3, 00:39:44.171 "base_bdevs_list": [ 00:39:44.171 { 00:39:44.171 "name": null, 00:39:44.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:44.171 "is_configured": false, 00:39:44.171 "data_offset": 2048, 00:39:44.171 "data_size": 63488 00:39:44.171 }, 00:39:44.171 { 00:39:44.171 "name": "pt2", 00:39:44.171 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:44.171 "is_configured": true, 00:39:44.171 "data_offset": 2048, 00:39:44.171 "data_size": 63488 00:39:44.171 }, 00:39:44.171 { 00:39:44.171 "name": "pt3", 00:39:44.171 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:44.171 "is_configured": true, 00:39:44.171 "data_offset": 2048, 00:39:44.171 "data_size": 63488 00:39:44.171 }, 00:39:44.171 { 00:39:44.171 "name": "pt4", 00:39:44.171 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:44.171 "is_configured": true, 00:39:44.171 "data_offset": 2048, 00:39:44.171 "data_size": 63488 00:39:44.171 } 00:39:44.171 ] 00:39:44.171 }' 00:39:44.171 16:16:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:44.171 16:16:48 -- common/autotest_common.sh@10 -- # set +x 00:39:44.428 16:16:48 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:39:44.428 16:16:48 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:39:44.686 [2024-07-22 16:16:48.742511] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:44.686 [2024-07-22 16:16:48.742573] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:39:44.686 [2024-07-22 16:16:48.742668] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:44.686 [2024-07-22 16:16:48.742758] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:44.686 [2024-07-22 16:16:48.742781] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:39:44.686 16:16:48 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:44.686 16:16:48 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:39:44.944 16:16:49 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:39:44.944 16:16:49 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:39:44.944 16:16:49 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:45.202 [2024-07-22 16:16:49.278793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:45.202 [2024-07-22 16:16:49.279138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:45.202 [2024-07-22 16:16:49.279183] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:39:45.202 [2024-07-22 16:16:49.279203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:45.202 [2024-07-22 16:16:49.281944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:45.202 [2024-07-22 16:16:49.282007] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:45.202 [2024-07-22 16:16:49.282127] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:39:45.203 [2024-07-22 16:16:49.282198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:45.203 pt1 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:45.203 16:16:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:45.461 16:16:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:45.461 "name": "raid_bdev1", 00:39:45.461 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:45.461 "strip_size_kb": 0, 00:39:45.461 "state": "configuring", 00:39:45.461 "raid_level": "raid1", 00:39:45.461 "superblock": true, 00:39:45.461 "num_base_bdevs": 4, 00:39:45.461 "num_base_bdevs_discovered": 1, 00:39:45.461 "num_base_bdevs_operational": 4, 00:39:45.461 "base_bdevs_list": [ 00:39:45.461 { 00:39:45.461 "name": "pt1", 00:39:45.461 "uuid": 
"a304285f-27df-5ce5-ae10-601112f58e1b", 00:39:45.461 "is_configured": true, 00:39:45.461 "data_offset": 2048, 00:39:45.461 "data_size": 63488 00:39:45.461 }, 00:39:45.461 { 00:39:45.461 "name": null, 00:39:45.461 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:45.461 "is_configured": false, 00:39:45.461 "data_offset": 2048, 00:39:45.461 "data_size": 63488 00:39:45.461 }, 00:39:45.461 { 00:39:45.461 "name": null, 00:39:45.461 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:45.461 "is_configured": false, 00:39:45.461 "data_offset": 2048, 00:39:45.461 "data_size": 63488 00:39:45.461 }, 00:39:45.461 { 00:39:45.461 "name": null, 00:39:45.461 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:45.461 "is_configured": false, 00:39:45.461 "data_offset": 2048, 00:39:45.461 "data_size": 63488 00:39:45.461 } 00:39:45.461 ] 00:39:45.461 }' 00:39:45.461 16:16:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:45.461 16:16:49 -- common/autotest_common.sh@10 -- # set +x 00:39:45.719 16:16:49 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:39:45.719 16:16:49 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:39:45.719 16:16:49 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:39:45.982 16:16:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:39:45.982 16:16:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:39:45.982 16:16:50 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:39:46.241 16:16:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:39:46.241 16:16:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:39:46.241 16:16:50 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:39:46.499 16:16:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:39:46.499 16:16:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:39:46.499 16:16:50 -- bdev/bdev_raid.sh@489 -- # i=3 00:39:46.499 16:16:50 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:39:46.757 [2024-07-22 16:16:50.879107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:39:46.757 [2024-07-22 16:16:50.879219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:46.757 [2024-07-22 16:16:50.879252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:39:46.757 [2024-07-22 16:16:50.879271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:46.757 [2024-07-22 16:16:50.879836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:46.757 [2024-07-22 16:16:50.879868] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:39:46.757 [2024-07-22 16:16:50.879980] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:39:46.757 [2024-07-22 16:16:50.880028] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:46.757 [2024-07-22 16:16:50.880042] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:46.757 [2024-07-22 16:16:50.880086] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 
00:39:46.757 [2024-07-22 16:16:50.880167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:39:46.757 pt4 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:46.757 16:16:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:47.014 16:16:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:47.014 "name": "raid_bdev1", 00:39:47.014 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:47.014 "strip_size_kb": 0, 00:39:47.014 "state": "configuring", 00:39:47.014 "raid_level": "raid1", 00:39:47.014 "superblock": true, 00:39:47.014 "num_base_bdevs": 4, 00:39:47.014 "num_base_bdevs_discovered": 1, 00:39:47.014 "num_base_bdevs_operational": 3, 00:39:47.014 "base_bdevs_list": [ 00:39:47.014 { 00:39:47.014 "name": null, 00:39:47.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:47.014 "is_configured": false, 00:39:47.014 "data_offset": 2048, 00:39:47.014 "data_size": 63488 00:39:47.014 }, 00:39:47.014 { 00:39:47.014 "name": null, 00:39:47.014 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:47.014 "is_configured": false, 00:39:47.014 "data_offset": 2048, 00:39:47.015 "data_size": 63488 00:39:47.015 }, 00:39:47.015 { 00:39:47.015 "name": null, 00:39:47.015 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:47.015 "is_configured": false, 00:39:47.015 "data_offset": 2048, 00:39:47.015 "data_size": 63488 00:39:47.015 }, 00:39:47.015 { 00:39:47.015 "name": "pt4", 00:39:47.015 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:47.015 "is_configured": true, 00:39:47.015 "data_offset": 2048, 00:39:47.015 "data_size": 63488 00:39:47.015 } 00:39:47.015 ] 00:39:47.015 }' 00:39:47.015 16:16:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:47.015 16:16:51 -- common/autotest_common.sh@10 -- # set +x 00:39:47.272 16:16:51 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:39:47.272 16:16:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:39:47.272 16:16:51 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:47.530 [2024-07-22 16:16:51.675390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:47.530 [2024-07-22 16:16:51.675524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:47.530 [2024-07-22 16:16:51.675567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:39:47.530 [2024-07-22 16:16:51.675584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:47.530 [2024-07-22 
16:16:51.676160] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:47.530 [2024-07-22 16:16:51.676187] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:47.530 [2024-07-22 16:16:51.676297] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:39:47.530 [2024-07-22 16:16:51.676327] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:47.530 pt2 00:39:47.530 16:16:51 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:39:47.530 16:16:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:39:47.530 16:16:51 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:39:47.787 [2024-07-22 16:16:51.907846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:39:47.787 [2024-07-22 16:16:51.908063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:47.787 [2024-07-22 16:16:51.908144] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:39:47.787 [2024-07-22 16:16:51.908166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:47.787 [2024-07-22 16:16:51.909135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:47.787 [2024-07-22 16:16:51.909176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:39:47.787 [2024-07-22 16:16:51.909375] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:39:47.787 [2024-07-22 16:16:51.909431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:39:47.787 [2024-07-22 16:16:51.909698] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:39:47.787 [2024-07-22 16:16:51.909715] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:39:47.787 [2024-07-22 16:16:51.909874] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:39:47.787 [2024-07-22 16:16:51.910398] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:39:47.787 [2024-07-22 16:16:51.910433] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:39:47.787 [2024-07-22 16:16:51.910668] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:47.787 pt3 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:47.787 16:16:51 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:47.787 16:16:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:48.044 16:16:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:48.044 "name": "raid_bdev1", 00:39:48.044 "uuid": "125fd1e7-7031-4d8f-b9f8-b182ed927ad0", 00:39:48.044 "strip_size_kb": 0, 00:39:48.044 "state": "online", 00:39:48.044 "raid_level": "raid1", 00:39:48.044 "superblock": true, 00:39:48.044 "num_base_bdevs": 4, 00:39:48.044 "num_base_bdevs_discovered": 3, 00:39:48.044 "num_base_bdevs_operational": 3, 00:39:48.044 "base_bdevs_list": [ 00:39:48.044 { 00:39:48.044 "name": null, 00:39:48.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:48.044 "is_configured": false, 00:39:48.044 "data_offset": 2048, 00:39:48.044 "data_size": 63488 00:39:48.044 }, 00:39:48.044 { 00:39:48.044 "name": "pt2", 00:39:48.044 "uuid": "da6ad6d2-92b2-532a-8e34-7db765fb0b75", 00:39:48.044 "is_configured": true, 00:39:48.045 "data_offset": 2048, 00:39:48.045 "data_size": 63488 00:39:48.045 }, 00:39:48.045 { 00:39:48.045 "name": "pt3", 00:39:48.045 "uuid": "5aeb5fad-86e9-5c56-bf7c-fd64cd8f608f", 00:39:48.045 "is_configured": true, 00:39:48.045 "data_offset": 2048, 00:39:48.045 "data_size": 63488 00:39:48.045 }, 00:39:48.045 { 00:39:48.045 "name": "pt4", 00:39:48.045 "uuid": "2431fbbf-29fa-5800-b25b-668694aa0978", 00:39:48.045 "is_configured": true, 00:39:48.045 "data_offset": 2048, 00:39:48.045 "data_size": 63488 00:39:48.045 } 00:39:48.045 ] 00:39:48.045 }' 00:39:48.045 16:16:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:48.045 16:16:52 -- common/autotest_common.sh@10 -- # set +x 00:39:48.610 16:16:52 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:48.610 16:16:52 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:39:48.869 [2024-07-22 16:16:52.964337] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:48.869 16:16:52 -- bdev/bdev_raid.sh@506 -- # '[' 125fd1e7-7031-4d8f-b9f8-b182ed927ad0 '!=' 125fd1e7-7031-4d8f-b9f8-b182ed927ad0 ']' 00:39:48.869 16:16:52 -- bdev/bdev_raid.sh@511 -- # killprocess 78833 00:39:48.869 16:16:52 -- common/autotest_common.sh@926 -- # '[' -z 78833 ']' 00:39:48.869 16:16:52 -- common/autotest_common.sh@930 -- # kill -0 78833 00:39:48.869 16:16:52 -- common/autotest_common.sh@931 -- # uname 00:39:48.869 16:16:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:39:48.869 16:16:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78833 00:39:48.869 killing process with pid 78833 00:39:48.869 16:16:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:39:48.869 16:16:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:39:48.869 16:16:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78833' 00:39:48.869 16:16:53 -- common/autotest_common.sh@945 -- # kill 78833 00:39:48.869 16:16:53 -- common/autotest_common.sh@950 -- # wait 78833 00:39:48.869 [2024-07-22 16:16:53.018493] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:48.869 [2024-07-22 16:16:53.018617] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:48.869 [2024-07-22 16:16:53.018711] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:48.869 [2024-07-22 16:16:53.018732] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:39:49.127 [2024-07-22 16:16:53.390572] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:39:50.532 00:39:50.532 real 0m21.041s 00:39:50.532 user 0m35.977s 00:39:50.532 sys 0m3.530s 00:39:50.532 16:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:50.532 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:39:50.532 ************************************ 00:39:50.532 END TEST raid_superblock_test 00:39:50.532 ************************************ 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:39:50.532 16:16:54 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:39:50.532 16:16:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:39:50.532 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:39:50.532 ************************************ 00:39:50.532 START TEST raid_rebuild_test 00:39:50.532 ************************************ 00:39:50.532 16:16:54 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@544 -- # raid_pid=79465 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79465 /var/tmp/spdk-raid.sock 00:39:50.532 16:16:54 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:50.532 16:16:54 -- common/autotest_common.sh@819 -- # '[' -z 79465 ']' 00:39:50.532 16:16:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:39:50.532 16:16:54 -- common/autotest_common.sh@824 
-- # local max_retries=100 00:39:50.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:39:50.532 16:16:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:39:50.532 16:16:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:39:50.532 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:39:50.791 [2024-07-22 16:16:54.875458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:39:50.791 [2024-07-22 16:16:54.875629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79465 ] 00:39:50.791 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:50.791 Zero copy mechanism will not be used. 00:39:50.791 [2024-07-22 16:16:55.045317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:51.357 [2024-07-22 16:16:55.331620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.357 [2024-07-22 16:16:55.567407] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:51.616 16:16:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:39:51.616 16:16:55 -- common/autotest_common.sh@852 -- # return 0 00:39:51.616 16:16:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:51.616 16:16:55 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:39:51.616 16:16:55 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:39:51.874 BaseBdev1 00:39:51.874 16:16:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:39:51.874 16:16:56 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:39:51.874 16:16:56 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:39:52.133 BaseBdev2 00:39:52.133 16:16:56 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:39:52.391 spare_malloc 00:39:52.391 16:16:56 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:52.650 spare_delay 00:39:52.650 16:16:56 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:39:52.910 [2024-07-22 16:16:57.127357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:52.910 [2024-07-22 16:16:57.127470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:52.910 [2024-07-22 16:16:57.127513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:39:52.910 [2024-07-22 16:16:57.127536] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:52.910 [2024-07-22 16:16:57.130484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:52.910 [2024-07-22 16:16:57.130531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:52.910 spare 00:39:52.910 16:16:57 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:39:53.168 [2024-07-22 16:16:57.355549] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:53.168 [2024-07-22 16:16:57.358102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:53.168 [2024-07-22 16:16:57.358223] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:39:53.168 [2024-07-22 16:16:57.358248] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:39:53.168 [2024-07-22 16:16:57.358403] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:39:53.168 [2024-07-22 16:16:57.358841] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:39:53.168 [2024-07-22 16:16:57.358859] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:39:53.168 [2024-07-22 16:16:57.359090] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:39:53.168 16:16:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:39:53.169 16:16:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:53.169 16:16:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:53.427 16:16:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:39:53.427 "name": "raid_bdev1", 00:39:53.427 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:39:53.427 "strip_size_kb": 0, 00:39:53.427 "state": "online", 00:39:53.427 "raid_level": "raid1", 00:39:53.427 "superblock": false, 00:39:53.427 "num_base_bdevs": 2, 00:39:53.427 "num_base_bdevs_discovered": 2, 00:39:53.427 "num_base_bdevs_operational": 2, 00:39:53.427 "base_bdevs_list": [ 00:39:53.427 { 00:39:53.427 "name": "BaseBdev1", 00:39:53.427 "uuid": "92af7b36-bb87-4c33-b478-e674fd0b1b5f", 00:39:53.427 "is_configured": true, 00:39:53.427 "data_offset": 0, 00:39:53.427 "data_size": 65536 00:39:53.427 }, 00:39:53.427 { 00:39:53.427 "name": "BaseBdev2", 00:39:53.427 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:39:53.427 "is_configured": true, 00:39:53.427 "data_offset": 0, 00:39:53.427 "data_size": 65536 00:39:53.427 } 00:39:53.427 ] 00:39:53.427 }' 00:39:53.427 16:16:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:39:53.427 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:39:53.993 16:16:57 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:39:53.993 16:16:57 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:39:53.993 [2024-07-22 16:16:58.240037] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:39:53.993 16:16:58 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:39:54.251 16:16:58 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:54.251 16:16:58 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:54.509 16:16:58 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:39:54.509 16:16:58 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:39:54.509 16:16:58 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:39:54.509 16:16:58 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@12 -- # local i 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:54.509 16:16:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:39:54.768 [2024-07-22 16:16:58.799976] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:39:54.768 /dev/nbd0 00:39:54.768 16:16:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:54.768 16:16:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:54.768 16:16:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:39:54.768 16:16:58 -- common/autotest_common.sh@857 -- # local i 00:39:54.768 16:16:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:39:54.768 16:16:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:39:54.768 16:16:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:39:54.768 16:16:58 -- common/autotest_common.sh@861 -- # break 00:39:54.768 16:16:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:39:54.768 16:16:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:39:54.768 16:16:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:54.768 1+0 records in 00:39:54.768 1+0 records out 00:39:54.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260497 s, 15.7 MB/s 00:39:54.768 16:16:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:54.768 16:16:58 -- common/autotest_common.sh@874 -- # size=4096 00:39:54.768 16:16:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:54.768 16:16:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:39:54.768 16:16:58 -- common/autotest_common.sh@877 -- # return 0 00:39:54.768 16:16:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:54.768 16:16:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:54.768 16:16:58 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:39:54.768 16:16:58 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:39:54.768 16:16:58 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:40:01.362 65536+0 records in 00:40:01.362 65536+0 records out 00:40:01.362 33554432 bytes (34 MB, 32 MiB) copied, 6.25314 s, 5.4 MB/s 00:40:01.362 
16:17:05 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@51 -- # local i 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:01.362 [2024-07-22 16:17:05.393567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@41 -- # break 00:40:01.362 16:17:05 -- bdev/nbd_common.sh@45 -- # return 0 00:40:01.363 16:17:05 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:40:01.363 [2024-07-22 16:17:05.633889] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:01.621 16:17:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:01.621 "name": "raid_bdev1", 00:40:01.621 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:01.621 "strip_size_kb": 0, 00:40:01.621 "state": "online", 00:40:01.621 "raid_level": "raid1", 00:40:01.621 "superblock": false, 00:40:01.621 "num_base_bdevs": 2, 00:40:01.621 "num_base_bdevs_discovered": 1, 00:40:01.621 "num_base_bdevs_operational": 1, 00:40:01.621 "base_bdevs_list": [ 00:40:01.621 { 00:40:01.621 "name": null, 00:40:01.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:01.621 "is_configured": false, 00:40:01.621 "data_offset": 0, 00:40:01.621 "data_size": 65536 00:40:01.621 }, 00:40:01.622 { 00:40:01.622 "name": "BaseBdev2", 00:40:01.622 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:01.622 "is_configured": true, 00:40:01.622 "data_offset": 0, 00:40:01.622 "data_size": 65536 00:40:01.622 } 00:40:01.622 ] 00:40:01.622 }' 00:40:01.622 16:17:05 -- bdev/bdev_raid.sh@129 -- 
# xtrace_disable 00:40:01.622 16:17:05 -- common/autotest_common.sh@10 -- # set +x 00:40:02.213 16:17:06 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:02.213 [2024-07-22 16:17:06.426201] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:40:02.213 [2024-07-22 16:17:06.426275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:02.213 [2024-07-22 16:17:06.443214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09480 00:40:02.213 [2024-07-22 16:17:06.445736] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:02.213 16:17:06 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:03.584 16:17:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:03.584 "name": "raid_bdev1", 00:40:03.584 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:03.584 "strip_size_kb": 0, 00:40:03.584 "state": "online", 00:40:03.584 "raid_level": "raid1", 00:40:03.584 "superblock": false, 00:40:03.584 "num_base_bdevs": 2, 00:40:03.584 "num_base_bdevs_discovered": 2, 00:40:03.584 "num_base_bdevs_operational": 2, 00:40:03.584 "process": { 00:40:03.584 "type": "rebuild", 00:40:03.584 "target": "spare", 00:40:03.584 "progress": { 00:40:03.584 "blocks": 24576, 00:40:03.584 "percent": 37 00:40:03.584 } 00:40:03.584 }, 00:40:03.584 "base_bdevs_list": [ 00:40:03.584 { 00:40:03.584 "name": "spare", 00:40:03.584 "uuid": "1514f6a8-49a8-5599-924b-b5aeed1e13f1", 00:40:03.584 "is_configured": true, 00:40:03.584 "data_offset": 0, 00:40:03.584 "data_size": 65536 00:40:03.584 }, 00:40:03.584 { 00:40:03.585 "name": "BaseBdev2", 00:40:03.585 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:03.585 "is_configured": true, 00:40:03.585 "data_offset": 0, 00:40:03.585 "data_size": 65536 00:40:03.585 } 00:40:03.585 ] 00:40:03.585 }' 00:40:03.585 16:17:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:03.585 16:17:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:03.585 16:17:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:03.585 16:17:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:03.585 16:17:07 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:40:03.842 [2024-07-22 16:17:07.985539] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:03.842 [2024-07-22 16:17:08.066252] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:03.842 [2024-07-22 16:17:08.066330] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:04.101 16:17:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:04.101 "name": "raid_bdev1", 00:40:04.101 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:04.101 "strip_size_kb": 0, 00:40:04.101 "state": "online", 00:40:04.101 "raid_level": "raid1", 00:40:04.101 "superblock": false, 00:40:04.101 "num_base_bdevs": 2, 00:40:04.101 "num_base_bdevs_discovered": 1, 00:40:04.101 "num_base_bdevs_operational": 1, 00:40:04.101 "base_bdevs_list": [ 00:40:04.101 { 00:40:04.101 "name": null, 00:40:04.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:04.102 "is_configured": false, 00:40:04.102 "data_offset": 0, 00:40:04.102 "data_size": 65536 00:40:04.102 }, 00:40:04.102 { 00:40:04.102 "name": "BaseBdev2", 00:40:04.102 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:04.102 "is_configured": true, 00:40:04.102 "data_offset": 0, 00:40:04.102 "data_size": 65536 00:40:04.102 } 00:40:04.102 ] 00:40:04.102 }' 00:40:04.102 16:17:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:04.102 16:17:08 -- common/autotest_common.sh@10 -- # set +x 00:40:04.669 16:17:08 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:04.669 16:17:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:04.669 16:17:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:40:04.669 16:17:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:40:04.669 16:17:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:04.669 16:17:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:04.669 16:17:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:04.928 16:17:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:04.928 "name": "raid_bdev1", 00:40:04.928 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:04.928 "strip_size_kb": 0, 00:40:04.928 "state": "online", 00:40:04.928 "raid_level": "raid1", 00:40:04.928 "superblock": false, 00:40:04.928 "num_base_bdevs": 2, 00:40:04.929 "num_base_bdevs_discovered": 1, 00:40:04.929 "num_base_bdevs_operational": 1, 00:40:04.929 "base_bdevs_list": [ 00:40:04.929 { 00:40:04.929 "name": null, 00:40:04.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:04.929 "is_configured": false, 00:40:04.929 "data_offset": 0, 00:40:04.929 "data_size": 65536 00:40:04.929 }, 00:40:04.929 { 00:40:04.929 "name": "BaseBdev2", 00:40:04.929 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:04.929 "is_configured": true, 00:40:04.929 "data_offset": 0, 00:40:04.929 "data_size": 65536 
00:40:04.929 } 00:40:04.929 ] 00:40:04.929 }' 00:40:04.929 16:17:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:04.929 16:17:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:04.929 16:17:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:04.929 16:17:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:40:04.929 16:17:08 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:05.187 [2024-07-22 16:17:09.218066] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:40:05.187 [2024-07-22 16:17:09.218133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:05.187 [2024-07-22 16:17:09.236884] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09550 00:40:05.187 [2024-07-22 16:17:09.239420] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:05.187 16:17:09 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:40:06.123 16:17:10 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:06.123 16:17:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:06.123 16:17:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:06.123 16:17:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:06.123 16:17:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:06.123 16:17:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:06.123 16:17:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:06.382 "name": "raid_bdev1", 00:40:06.382 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:06.382 "strip_size_kb": 0, 00:40:06.382 "state": "online", 00:40:06.382 "raid_level": "raid1", 00:40:06.382 "superblock": false, 00:40:06.382 "num_base_bdevs": 2, 00:40:06.382 "num_base_bdevs_discovered": 2, 00:40:06.382 "num_base_bdevs_operational": 2, 00:40:06.382 "process": { 00:40:06.382 "type": "rebuild", 00:40:06.382 "target": "spare", 00:40:06.382 "progress": { 00:40:06.382 "blocks": 24576, 00:40:06.382 "percent": 37 00:40:06.382 } 00:40:06.382 }, 00:40:06.382 "base_bdevs_list": [ 00:40:06.382 { 00:40:06.382 "name": "spare", 00:40:06.382 "uuid": "1514f6a8-49a8-5599-924b-b5aeed1e13f1", 00:40:06.382 "is_configured": true, 00:40:06.382 "data_offset": 0, 00:40:06.382 "data_size": 65536 00:40:06.382 }, 00:40:06.382 { 00:40:06.382 "name": "BaseBdev2", 00:40:06.382 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:06.382 "is_configured": true, 00:40:06.382 "data_offset": 0, 00:40:06.382 "data_size": 65536 00:40:06.382 } 00:40:06.382 ] 00:40:06.382 }' 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:40:06.382 16:17:10 -- 
bdev/bdev_raid.sh@657 -- # local timeout=395 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:06.382 16:17:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:06.640 16:17:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:06.641 "name": "raid_bdev1", 00:40:06.641 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:06.641 "strip_size_kb": 0, 00:40:06.641 "state": "online", 00:40:06.641 "raid_level": "raid1", 00:40:06.641 "superblock": false, 00:40:06.641 "num_base_bdevs": 2, 00:40:06.641 "num_base_bdevs_discovered": 2, 00:40:06.641 "num_base_bdevs_operational": 2, 00:40:06.641 "process": { 00:40:06.641 "type": "rebuild", 00:40:06.641 "target": "spare", 00:40:06.641 "progress": { 00:40:06.641 "blocks": 30720, 00:40:06.641 "percent": 46 00:40:06.641 } 00:40:06.641 }, 00:40:06.641 "base_bdevs_list": [ 00:40:06.641 { 00:40:06.641 "name": "spare", 00:40:06.641 "uuid": "1514f6a8-49a8-5599-924b-b5aeed1e13f1", 00:40:06.641 "is_configured": true, 00:40:06.641 "data_offset": 0, 00:40:06.641 "data_size": 65536 00:40:06.641 }, 00:40:06.641 { 00:40:06.641 "name": "BaseBdev2", 00:40:06.641 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:06.641 "is_configured": true, 00:40:06.641 "data_offset": 0, 00:40:06.641 "data_size": 65536 00:40:06.641 } 00:40:06.641 ] 00:40:06.641 }' 00:40:06.641 16:17:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:06.641 16:17:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:06.641 16:17:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:06.641 16:17:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:06.641 16:17:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:08.018 16:17:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:08.018 16:17:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:08.018 "name": "raid_bdev1", 00:40:08.018 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:08.018 "strip_size_kb": 0, 00:40:08.018 "state": "online", 00:40:08.018 "raid_level": "raid1", 00:40:08.018 "superblock": false, 00:40:08.018 "num_base_bdevs": 2, 00:40:08.018 "num_base_bdevs_discovered": 2, 00:40:08.018 "num_base_bdevs_operational": 2, 00:40:08.018 "process": { 00:40:08.018 "type": "rebuild", 00:40:08.018 "target": "spare", 
00:40:08.018 "progress": { 00:40:08.018 "blocks": 57344, 00:40:08.018 "percent": 87 00:40:08.018 } 00:40:08.018 }, 00:40:08.018 "base_bdevs_list": [ 00:40:08.018 { 00:40:08.018 "name": "spare", 00:40:08.018 "uuid": "1514f6a8-49a8-5599-924b-b5aeed1e13f1", 00:40:08.018 "is_configured": true, 00:40:08.018 "data_offset": 0, 00:40:08.018 "data_size": 65536 00:40:08.018 }, 00:40:08.018 { 00:40:08.018 "name": "BaseBdev2", 00:40:08.018 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:08.018 "is_configured": true, 00:40:08.018 "data_offset": 0, 00:40:08.018 "data_size": 65536 00:40:08.018 } 00:40:08.018 ] 00:40:08.018 }' 00:40:08.018 16:17:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:08.018 16:17:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:08.018 16:17:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:08.018 16:17:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:08.018 16:17:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:08.277 [2024-07-22 16:17:12.466852] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:08.277 [2024-07-22 16:17:12.467007] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:08.277 [2024-07-22 16:17:12.467083] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:09.211 16:17:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.469 16:17:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:09.469 "name": "raid_bdev1", 00:40:09.469 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:09.469 "strip_size_kb": 0, 00:40:09.469 "state": "online", 00:40:09.470 "raid_level": "raid1", 00:40:09.470 "superblock": false, 00:40:09.470 "num_base_bdevs": 2, 00:40:09.470 "num_base_bdevs_discovered": 2, 00:40:09.470 "num_base_bdevs_operational": 2, 00:40:09.470 "base_bdevs_list": [ 00:40:09.470 { 00:40:09.470 "name": "spare", 00:40:09.470 "uuid": "1514f6a8-49a8-5599-924b-b5aeed1e13f1", 00:40:09.470 "is_configured": true, 00:40:09.470 "data_offset": 0, 00:40:09.470 "data_size": 65536 00:40:09.470 }, 00:40:09.470 { 00:40:09.470 "name": "BaseBdev2", 00:40:09.470 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:09.470 "is_configured": true, 00:40:09.470 "data_offset": 0, 00:40:09.470 "data_size": 65536 00:40:09.470 } 00:40:09.470 ] 00:40:09.470 }' 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@660 -- # break 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 
00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:09.470 16:17:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:09.728 "name": "raid_bdev1", 00:40:09.728 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:09.728 "strip_size_kb": 0, 00:40:09.728 "state": "online", 00:40:09.728 "raid_level": "raid1", 00:40:09.728 "superblock": false, 00:40:09.728 "num_base_bdevs": 2, 00:40:09.728 "num_base_bdevs_discovered": 2, 00:40:09.728 "num_base_bdevs_operational": 2, 00:40:09.728 "base_bdevs_list": [ 00:40:09.728 { 00:40:09.728 "name": "spare", 00:40:09.728 "uuid": "1514f6a8-49a8-5599-924b-b5aeed1e13f1", 00:40:09.728 "is_configured": true, 00:40:09.728 "data_offset": 0, 00:40:09.728 "data_size": 65536 00:40:09.728 }, 00:40:09.728 { 00:40:09.728 "name": "BaseBdev2", 00:40:09.728 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:09.728 "is_configured": true, 00:40:09.728 "data_offset": 0, 00:40:09.728 "data_size": 65536 00:40:09.728 } 00:40:09.728 ] 00:40:09.728 }' 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:09.728 16:17:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:09.986 16:17:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:09.986 "name": "raid_bdev1", 00:40:09.986 "uuid": "77d5c541-00cf-4fa0-84f2-5e9c687f2056", 00:40:09.986 "strip_size_kb": 0, 00:40:09.986 "state": "online", 00:40:09.986 "raid_level": "raid1", 00:40:09.986 "superblock": false, 00:40:09.986 "num_base_bdevs": 2, 00:40:09.986 "num_base_bdevs_discovered": 2, 00:40:09.986 "num_base_bdevs_operational": 2, 00:40:09.986 "base_bdevs_list": [ 00:40:09.986 { 00:40:09.986 "name": "spare", 00:40:09.986 "uuid": "1514f6a8-49a8-5599-924b-b5aeed1e13f1", 00:40:09.986 "is_configured": true, 00:40:09.986 "data_offset": 0, 00:40:09.986 "data_size": 65536 00:40:09.986 }, 00:40:09.986 { 00:40:09.986 "name": 
"BaseBdev2", 00:40:09.986 "uuid": "4e2c397c-a7df-40a5-bc4b-450807a8262e", 00:40:09.986 "is_configured": true, 00:40:09.986 "data_offset": 0, 00:40:09.986 "data_size": 65536 00:40:09.986 } 00:40:09.986 ] 00:40:09.986 }' 00:40:09.986 16:17:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:09.986 16:17:14 -- common/autotest_common.sh@10 -- # set +x 00:40:10.245 16:17:14 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:10.503 [2024-07-22 16:17:14.706485] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:10.503 [2024-07-22 16:17:14.706535] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:10.503 [2024-07-22 16:17:14.706660] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:10.503 [2024-07-22 16:17:14.706791] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:10.503 [2024-07-22 16:17:14.706810] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:40:10.503 16:17:14 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:10.503 16:17:14 -- bdev/bdev_raid.sh@671 -- # jq length 00:40:11.071 16:17:15 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:40:11.071 16:17:15 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:40:11.071 16:17:15 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@12 -- # local i 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:11.071 16:17:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:11.329 /dev/nbd0 00:40:11.329 16:17:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:11.329 16:17:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:11.329 16:17:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:40:11.329 16:17:15 -- common/autotest_common.sh@857 -- # local i 00:40:11.329 16:17:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:40:11.329 16:17:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:40:11.329 16:17:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:40:11.329 16:17:15 -- common/autotest_common.sh@861 -- # break 00:40:11.329 16:17:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:40:11.329 16:17:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:40:11.330 16:17:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:11.330 1+0 records in 00:40:11.330 1+0 records out 00:40:11.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348061 s, 11.8 MB/s 00:40:11.330 16:17:15 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:11.330 16:17:15 -- common/autotest_common.sh@874 -- # size=4096 00:40:11.330 16:17:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:11.330 16:17:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:40:11.330 16:17:15 -- common/autotest_common.sh@877 -- # return 0 00:40:11.330 16:17:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:11.330 16:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:11.330 16:17:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:40:11.588 /dev/nbd1 00:40:11.588 16:17:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:11.588 16:17:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:11.588 16:17:15 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:40:11.588 16:17:15 -- common/autotest_common.sh@857 -- # local i 00:40:11.588 16:17:15 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:40:11.588 16:17:15 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:40:11.588 16:17:15 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:40:11.588 16:17:15 -- common/autotest_common.sh@861 -- # break 00:40:11.588 16:17:15 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:40:11.588 16:17:15 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:40:11.588 16:17:15 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:11.588 1+0 records in 00:40:11.588 1+0 records out 00:40:11.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668766 s, 6.1 MB/s 00:40:11.588 16:17:15 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:11.588 16:17:15 -- common/autotest_common.sh@874 -- # size=4096 00:40:11.588 16:17:15 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:11.588 16:17:15 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:40:11.588 16:17:15 -- common/autotest_common.sh@877 -- # return 0 00:40:11.588 16:17:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:11.588 16:17:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:11.588 16:17:15 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:40:11.847 16:17:15 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:40:11.847 16:17:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:11.847 16:17:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:11.847 16:17:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:11.847 16:17:15 -- bdev/nbd_common.sh@51 -- # local i 00:40:11.847 16:17:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:11.847 16:17:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@41 -- # break 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@45 -- # 
return 0 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:12.106 16:17:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@41 -- # break 00:40:12.364 16:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:40:12.364 16:17:16 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:40:12.364 16:17:16 -- bdev/bdev_raid.sh@709 -- # killprocess 79465 00:40:12.364 16:17:16 -- common/autotest_common.sh@926 -- # '[' -z 79465 ']' 00:40:12.364 16:17:16 -- common/autotest_common.sh@930 -- # kill -0 79465 00:40:12.364 16:17:16 -- common/autotest_common.sh@931 -- # uname 00:40:12.364 16:17:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:12.364 16:17:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79465 00:40:12.364 killing process with pid 79465 00:40:12.364 Received shutdown signal, test time was about 60.000000 seconds 00:40:12.364 00:40:12.364 Latency(us) 00:40:12.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.364 =================================================================================================================== 00:40:12.364 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:12.364 16:17:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:12.364 16:17:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:12.364 16:17:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79465' 00:40:12.364 16:17:16 -- common/autotest_common.sh@945 -- # kill 79465 00:40:12.364 16:17:16 -- common/autotest_common.sh@950 -- # wait 79465 00:40:12.364 [2024-07-22 16:17:16.546593] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:12.622 [2024-07-22 16:17:16.825640] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@711 -- # return 0 00:40:14.083 00:40:14.083 real 0m23.365s 00:40:14.083 user 0m29.492s 00:40:14.083 sys 0m5.097s 00:40:14.083 16:17:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:14.083 16:17:18 -- common/autotest_common.sh@10 -- # set +x 00:40:14.083 ************************************ 00:40:14.083 END TEST raid_rebuild_test 00:40:14.083 ************************************ 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:40:14.083 16:17:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:40:14.083 16:17:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:14.083 16:17:18 -- common/autotest_common.sh@10 -- # set +x 00:40:14.083 ************************************ 00:40:14.083 START TEST raid_rebuild_test_sb 00:40:14.083 ************************************ 00:40:14.083 16:17:18 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 
00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@544 -- # raid_pid=79990 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79990 /var/tmp/spdk-raid.sock 00:40:14.083 16:17:18 -- common/autotest_common.sh@819 -- # '[' -z 79990 ']' 00:40:14.083 16:17:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:14.083 16:17:18 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:14.083 16:17:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:14.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:40:14.083 16:17:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:14.083 16:17:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:14.083 16:17:18 -- common/autotest_common.sh@10 -- # set +x 00:40:14.083 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:14.083 Zero copy mechanism will not be used. 00:40:14.083 [2024-07-22 16:17:18.305318] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:40:14.083 [2024-07-22 16:17:18.305540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79990 ] 00:40:14.341 [2024-07-22 16:17:18.489557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.598 [2024-07-22 16:17:18.798044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.856 [2024-07-22 16:17:19.026273] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:15.114 16:17:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:15.114 16:17:19 -- common/autotest_common.sh@852 -- # return 0 00:40:15.114 16:17:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:40:15.114 16:17:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:40:15.114 16:17:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:40:15.372 BaseBdev1_malloc 00:40:15.372 16:17:19 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:15.629 [2024-07-22 16:17:19.805457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:15.629 [2024-07-22 16:17:19.805558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:15.630 [2024-07-22 16:17:19.805601] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:40:15.630 [2024-07-22 16:17:19.805621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:15.630 [2024-07-22 16:17:19.808365] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:15.630 [2024-07-22 16:17:19.808410] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:40:15.630 BaseBdev1 00:40:15.630 16:17:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:40:15.630 16:17:19 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:40:15.630 16:17:19 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:40:15.888 BaseBdev2_malloc 00:40:16.147 16:17:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:16.147 [2024-07-22 16:17:20.412721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:16.147 [2024-07-22 16:17:20.412843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:16.147 [2024-07-22 16:17:20.412889] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:40:16.147 [2024-07-22 16:17:20.412914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:16.147 [2024-07-22 16:17:20.415665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:16.147 [2024-07-22 16:17:20.415718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:16.147 BaseBdev2 00:40:16.404 16:17:20 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:40:16.662 spare_malloc 00:40:16.662 16:17:20 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:16.662 spare_delay 00:40:16.662 16:17:20 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:17.228 [2024-07-22 16:17:21.203419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:17.228 [2024-07-22 16:17:21.203598] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:17.228 [2024-07-22 16:17:21.203644] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:40:17.228 [2024-07-22 16:17:21.203678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:17.228 [2024-07-22 16:17:21.206743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:17.228 [2024-07-22 16:17:21.206789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:17.228 spare 00:40:17.228 16:17:21 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:40:17.228 [2024-07-22 16:17:21.500190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:17.486 [2024-07-22 16:17:21.502939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:17.486 [2024-07-22 16:17:21.503257] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:40:17.486 [2024-07-22 16:17:21.503281] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:40:17.486 [2024-07-22 16:17:21.503470] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:40:17.486 [2024-07-22 16:17:21.504005] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:40:17.486 [2024-07-22 16:17:21.504032] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:40:17.486 [2024-07-22 16:17:21.504326] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:17.486 16:17:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:17.745 16:17:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:17.745 "name": "raid_bdev1", 00:40:17.745 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:17.745 
"strip_size_kb": 0, 00:40:17.745 "state": "online", 00:40:17.745 "raid_level": "raid1", 00:40:17.745 "superblock": true, 00:40:17.745 "num_base_bdevs": 2, 00:40:17.745 "num_base_bdevs_discovered": 2, 00:40:17.745 "num_base_bdevs_operational": 2, 00:40:17.745 "base_bdevs_list": [ 00:40:17.745 { 00:40:17.745 "name": "BaseBdev1", 00:40:17.745 "uuid": "99133aa1-0f5c-55c0-af97-27d3dd38b09d", 00:40:17.745 "is_configured": true, 00:40:17.745 "data_offset": 2048, 00:40:17.745 "data_size": 63488 00:40:17.745 }, 00:40:17.745 { 00:40:17.745 "name": "BaseBdev2", 00:40:17.745 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:17.745 "is_configured": true, 00:40:17.745 "data_offset": 2048, 00:40:17.745 "data_size": 63488 00:40:17.745 } 00:40:17.745 ] 00:40:17.745 }' 00:40:17.745 16:17:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:17.745 16:17:21 -- common/autotest_common.sh@10 -- # set +x 00:40:18.004 16:17:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:18.004 16:17:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:40:18.263 [2024-07-22 16:17:22.489071] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:18.263 16:17:22 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:40:18.263 16:17:22 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:18.263 16:17:22 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:18.834 16:17:22 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:40:18.834 16:17:22 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:40:18.834 16:17:22 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:40:18.834 16:17:22 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@12 -- # local i 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:18.834 16:17:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:40:18.834 [2024-07-22 16:17:23.012928] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:40:18.834 /dev/nbd0 00:40:18.834 16:17:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:18.834 16:17:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:18.834 16:17:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:40:18.834 16:17:23 -- common/autotest_common.sh@857 -- # local i 00:40:18.834 16:17:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:40:18.834 16:17:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:40:18.834 16:17:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:40:18.834 16:17:23 -- common/autotest_common.sh@861 -- # break 00:40:18.834 16:17:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:40:18.834 16:17:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:40:18.834 16:17:23 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:18.834 1+0 records in 00:40:18.834 1+0 records out 00:40:18.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225341 s, 18.2 MB/s 00:40:18.834 16:17:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.834 16:17:23 -- common/autotest_common.sh@874 -- # size=4096 00:40:18.834 16:17:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:18.834 16:17:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:40:18.834 16:17:23 -- common/autotest_common.sh@877 -- # return 0 00:40:18.834 16:17:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:18.834 16:17:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:18.834 16:17:23 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:40:18.834 16:17:23 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:40:18.834 16:17:23 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:40:25.401 63488+0 records in 00:40:25.401 63488+0 records out 00:40:25.401 32505856 bytes (33 MB, 31 MiB) copied, 6.57728 s, 4.9 MB/s 00:40:25.401 16:17:29 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:40:25.401 16:17:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:25.401 16:17:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:25.401 16:17:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:25.401 16:17:29 -- bdev/nbd_common.sh@51 -- # local i 00:40:25.401 16:17:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:25.401 16:17:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:25.660 [2024-07-22 16:17:29.894905] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@41 -- # break 00:40:25.660 16:17:29 -- bdev/nbd_common.sh@45 -- # return 0 00:40:25.660 16:17:29 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:40:25.918 [2024-07-22 16:17:30.171477] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:26.176 16:17:30 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:26.176 16:17:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:26.176 16:17:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:26.176 16:17:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:26.176 16:17:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:26.176 16:17:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:40:26.177 16:17:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:26.177 16:17:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:26.177 16:17:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:26.177 16:17:30 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:40:26.177 16:17:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:26.177 16:17:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:26.436 16:17:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:26.436 "name": "raid_bdev1", 00:40:26.436 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:26.436 "strip_size_kb": 0, 00:40:26.436 "state": "online", 00:40:26.436 "raid_level": "raid1", 00:40:26.436 "superblock": true, 00:40:26.436 "num_base_bdevs": 2, 00:40:26.436 "num_base_bdevs_discovered": 1, 00:40:26.436 "num_base_bdevs_operational": 1, 00:40:26.436 "base_bdevs_list": [ 00:40:26.436 { 00:40:26.436 "name": null, 00:40:26.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:26.436 "is_configured": false, 00:40:26.436 "data_offset": 2048, 00:40:26.436 "data_size": 63488 00:40:26.436 }, 00:40:26.436 { 00:40:26.436 "name": "BaseBdev2", 00:40:26.436 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:26.436 "is_configured": true, 00:40:26.436 "data_offset": 2048, 00:40:26.436 "data_size": 63488 00:40:26.436 } 00:40:26.436 ] 00:40:26.436 }' 00:40:26.436 16:17:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:26.436 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:40:26.695 16:17:30 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:26.953 [2024-07-22 16:17:31.039921] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:40:26.953 [2024-07-22 16:17:31.040026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:26.953 [2024-07-22 16:17:31.057189] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2c10 00:40:26.953 [2024-07-22 16:17:31.060071] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:26.953 16:17:31 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:40:27.915 16:17:32 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:27.915 16:17:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:27.915 16:17:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:27.915 16:17:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:27.915 16:17:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:27.915 16:17:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:27.915 16:17:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:28.173 16:17:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:28.173 "name": "raid_bdev1", 00:40:28.173 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:28.173 "strip_size_kb": 0, 00:40:28.173 "state": "online", 00:40:28.173 "raid_level": "raid1", 00:40:28.173 "superblock": true, 00:40:28.173 "num_base_bdevs": 2, 00:40:28.173 "num_base_bdevs_discovered": 2, 00:40:28.173 "num_base_bdevs_operational": 2, 00:40:28.173 "process": { 00:40:28.173 "type": "rebuild", 00:40:28.173 "target": "spare", 00:40:28.173 "progress": { 00:40:28.173 "blocks": 24576, 00:40:28.173 "percent": 38 00:40:28.173 } 00:40:28.173 }, 00:40:28.173 "base_bdevs_list": [ 00:40:28.173 { 00:40:28.173 "name": "spare", 00:40:28.173 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:28.173 "is_configured": true, 00:40:28.173 
"data_offset": 2048, 00:40:28.173 "data_size": 63488 00:40:28.173 }, 00:40:28.173 { 00:40:28.173 "name": "BaseBdev2", 00:40:28.173 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:28.173 "is_configured": true, 00:40:28.173 "data_offset": 2048, 00:40:28.173 "data_size": 63488 00:40:28.173 } 00:40:28.173 ] 00:40:28.173 }' 00:40:28.173 16:17:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:28.173 16:17:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:28.173 16:17:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:28.173 16:17:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:28.173 16:17:32 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:40:28.432 [2024-07-22 16:17:32.615305] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:28.432 [2024-07-22 16:17:32.675261] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:28.432 [2024-07-22 16:17:32.675374] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:28.691 16:17:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:28.951 16:17:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:28.951 "name": "raid_bdev1", 00:40:28.951 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:28.951 "strip_size_kb": 0, 00:40:28.951 "state": "online", 00:40:28.951 "raid_level": "raid1", 00:40:28.951 "superblock": true, 00:40:28.951 "num_base_bdevs": 2, 00:40:28.951 "num_base_bdevs_discovered": 1, 00:40:28.951 "num_base_bdevs_operational": 1, 00:40:28.951 "base_bdevs_list": [ 00:40:28.951 { 00:40:28.951 "name": null, 00:40:28.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:28.951 "is_configured": false, 00:40:28.951 "data_offset": 2048, 00:40:28.951 "data_size": 63488 00:40:28.951 }, 00:40:28.951 { 00:40:28.951 "name": "BaseBdev2", 00:40:28.951 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:28.951 "is_configured": true, 00:40:28.951 "data_offset": 2048, 00:40:28.951 "data_size": 63488 00:40:28.951 } 00:40:28.951 ] 00:40:28.951 }' 00:40:28.951 16:17:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:28.951 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:40:29.217 16:17:33 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:29.217 16:17:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:29.217 16:17:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:40:29.217 16:17:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:40:29.217 16:17:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:29.217 16:17:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:29.217 16:17:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:29.476 16:17:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:29.476 "name": "raid_bdev1", 00:40:29.476 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:29.476 "strip_size_kb": 0, 00:40:29.476 "state": "online", 00:40:29.476 "raid_level": "raid1", 00:40:29.476 "superblock": true, 00:40:29.476 "num_base_bdevs": 2, 00:40:29.476 "num_base_bdevs_discovered": 1, 00:40:29.476 "num_base_bdevs_operational": 1, 00:40:29.476 "base_bdevs_list": [ 00:40:29.476 { 00:40:29.476 "name": null, 00:40:29.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:29.476 "is_configured": false, 00:40:29.476 "data_offset": 2048, 00:40:29.476 "data_size": 63488 00:40:29.476 }, 00:40:29.476 { 00:40:29.476 "name": "BaseBdev2", 00:40:29.476 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:29.476 "is_configured": true, 00:40:29.476 "data_offset": 2048, 00:40:29.476 "data_size": 63488 00:40:29.476 } 00:40:29.476 ] 00:40:29.476 }' 00:40:29.476 16:17:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:29.476 16:17:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:29.476 16:17:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:29.476 16:17:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:40:29.476 16:17:33 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:29.735 [2024-07-22 16:17:33.929406] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:40:29.735 [2024-07-22 16:17:33.929546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:29.735 [2024-07-22 16:17:33.946152] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2ce0 00:40:29.735 [2024-07-22 16:17:33.949191] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:29.735 16:17:33 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:40:31.116 16:17:34 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:31.116 16:17:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:31.116 16:17:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:31.116 16:17:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:31.116 16:17:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:31.116 16:17:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:31.116 16:17:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:31.116 "name": "raid_bdev1", 00:40:31.116 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:31.116 "strip_size_kb": 0, 00:40:31.116 "state": "online", 00:40:31.116 "raid_level": "raid1", 00:40:31.116 "superblock": true, 00:40:31.116 "num_base_bdevs": 2, 00:40:31.116 "num_base_bdevs_discovered": 2, 00:40:31.116 "num_base_bdevs_operational": 2, 00:40:31.116 "process": { 00:40:31.116 "type": "rebuild", 00:40:31.116 "target": "spare", 
00:40:31.116 "progress": { 00:40:31.116 "blocks": 24576, 00:40:31.116 "percent": 38 00:40:31.116 } 00:40:31.116 }, 00:40:31.116 "base_bdevs_list": [ 00:40:31.116 { 00:40:31.116 "name": "spare", 00:40:31.116 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:31.116 "is_configured": true, 00:40:31.116 "data_offset": 2048, 00:40:31.116 "data_size": 63488 00:40:31.116 }, 00:40:31.116 { 00:40:31.116 "name": "BaseBdev2", 00:40:31.116 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:31.116 "is_configured": true, 00:40:31.116 "data_offset": 2048, 00:40:31.116 "data_size": 63488 00:40:31.116 } 00:40:31.116 ] 00:40:31.116 }' 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:40:31.116 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@657 -- # local timeout=420 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:31.116 16:17:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:31.375 16:17:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:31.375 "name": "raid_bdev1", 00:40:31.375 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:31.375 "strip_size_kb": 0, 00:40:31.375 "state": "online", 00:40:31.375 "raid_level": "raid1", 00:40:31.375 "superblock": true, 00:40:31.375 "num_base_bdevs": 2, 00:40:31.375 "num_base_bdevs_discovered": 2, 00:40:31.375 "num_base_bdevs_operational": 2, 00:40:31.375 "process": { 00:40:31.375 "type": "rebuild", 00:40:31.375 "target": "spare", 00:40:31.375 "progress": { 00:40:31.375 "blocks": 30720, 00:40:31.375 "percent": 48 00:40:31.375 } 00:40:31.375 }, 00:40:31.375 "base_bdevs_list": [ 00:40:31.375 { 00:40:31.375 "name": "spare", 00:40:31.375 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:31.375 "is_configured": true, 00:40:31.375 "data_offset": 2048, 00:40:31.375 "data_size": 63488 00:40:31.375 }, 00:40:31.375 { 00:40:31.375 "name": "BaseBdev2", 00:40:31.375 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:31.375 "is_configured": true, 00:40:31.375 "data_offset": 2048, 00:40:31.375 "data_size": 63488 00:40:31.375 } 00:40:31.375 ] 00:40:31.375 }' 00:40:31.375 16:17:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:31.375 16:17:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:40:31.375 16:17:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:31.375 16:17:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:31.375 16:17:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:32.755 "name": "raid_bdev1", 00:40:32.755 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:32.755 "strip_size_kb": 0, 00:40:32.755 "state": "online", 00:40:32.755 "raid_level": "raid1", 00:40:32.755 "superblock": true, 00:40:32.755 "num_base_bdevs": 2, 00:40:32.755 "num_base_bdevs_discovered": 2, 00:40:32.755 "num_base_bdevs_operational": 2, 00:40:32.755 "process": { 00:40:32.755 "type": "rebuild", 00:40:32.755 "target": "spare", 00:40:32.755 "progress": { 00:40:32.755 "blocks": 59392, 00:40:32.755 "percent": 93 00:40:32.755 } 00:40:32.755 }, 00:40:32.755 "base_bdevs_list": [ 00:40:32.755 { 00:40:32.755 "name": "spare", 00:40:32.755 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:32.755 "is_configured": true, 00:40:32.755 "data_offset": 2048, 00:40:32.755 "data_size": 63488 00:40:32.755 }, 00:40:32.755 { 00:40:32.755 "name": "BaseBdev2", 00:40:32.755 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:32.755 "is_configured": true, 00:40:32.755 "data_offset": 2048, 00:40:32.755 "data_size": 63488 00:40:32.755 } 00:40:32.755 ] 00:40:32.755 }' 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:32.755 16:17:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:33.012 [2024-07-22 16:17:37.077890] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:33.012 [2024-07-22 16:17:37.078048] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:33.012 [2024-07-22 16:17:37.078232] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:33.946 16:17:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:34.204 "name": "raid_bdev1", 00:40:34.204 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:34.204 "strip_size_kb": 0, 00:40:34.204 "state": "online", 00:40:34.204 "raid_level": "raid1", 00:40:34.204 "superblock": true, 00:40:34.204 "num_base_bdevs": 2, 00:40:34.204 "num_base_bdevs_discovered": 2, 00:40:34.204 "num_base_bdevs_operational": 2, 00:40:34.204 "base_bdevs_list": [ 00:40:34.204 { 00:40:34.204 "name": "spare", 00:40:34.204 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:34.204 "is_configured": true, 00:40:34.204 "data_offset": 2048, 00:40:34.204 "data_size": 63488 00:40:34.204 }, 00:40:34.204 { 00:40:34.204 "name": "BaseBdev2", 00:40:34.204 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:34.204 "is_configured": true, 00:40:34.204 "data_offset": 2048, 00:40:34.204 "data_size": 63488 00:40:34.204 } 00:40:34.204 ] 00:40:34.204 }' 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@660 -- # break 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:34.204 16:17:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:34.462 16:17:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:34.462 "name": "raid_bdev1", 00:40:34.462 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:34.462 "strip_size_kb": 0, 00:40:34.462 "state": "online", 00:40:34.462 "raid_level": "raid1", 00:40:34.462 "superblock": true, 00:40:34.462 "num_base_bdevs": 2, 00:40:34.462 "num_base_bdevs_discovered": 2, 00:40:34.462 "num_base_bdevs_operational": 2, 00:40:34.462 "base_bdevs_list": [ 00:40:34.462 { 00:40:34.462 "name": "spare", 00:40:34.462 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:34.462 "is_configured": true, 00:40:34.462 "data_offset": 2048, 00:40:34.462 "data_size": 63488 00:40:34.462 }, 00:40:34.462 { 00:40:34.462 "name": "BaseBdev2", 00:40:34.463 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:34.463 "is_configured": true, 00:40:34.463 "data_offset": 2048, 00:40:34.463 "data_size": 63488 00:40:34.463 } 00:40:34.463 ] 00:40:34.463 }' 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:34.463 16:17:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:34.720 16:17:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:34.720 "name": "raid_bdev1", 00:40:34.720 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:34.720 "strip_size_kb": 0, 00:40:34.720 "state": "online", 00:40:34.720 "raid_level": "raid1", 00:40:34.720 "superblock": true, 00:40:34.720 "num_base_bdevs": 2, 00:40:34.720 "num_base_bdevs_discovered": 2, 00:40:34.720 "num_base_bdevs_operational": 2, 00:40:34.720 "base_bdevs_list": [ 00:40:34.720 { 00:40:34.720 "name": "spare", 00:40:34.720 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:34.721 "is_configured": true, 00:40:34.721 "data_offset": 2048, 00:40:34.721 "data_size": 63488 00:40:34.721 }, 00:40:34.721 { 00:40:34.721 "name": "BaseBdev2", 00:40:34.721 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:34.721 "is_configured": true, 00:40:34.721 "data_offset": 2048, 00:40:34.721 "data_size": 63488 00:40:34.721 } 00:40:34.721 ] 00:40:34.721 }' 00:40:34.721 16:17:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:34.721 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:40:34.978 16:17:39 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:35.236 [2024-07-22 16:17:39.422575] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:35.236 [2024-07-22 16:17:39.422674] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:35.236 [2024-07-22 16:17:39.422904] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:35.236 [2024-07-22 16:17:39.423149] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:35.236 [2024-07-22 16:17:39.423186] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:40:35.236 16:17:39 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:35.236 16:17:39 -- bdev/bdev_raid.sh@671 -- # jq length 00:40:35.495 16:17:39 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:40:35.495 16:17:39 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:40:35.495 16:17:39 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:40:35.495 16:17:39 -- bdev/nbd_common.sh@12 -- # local i 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:35.495 16:17:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:40:35.753 /dev/nbd0 00:40:35.753 16:17:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:35.753 16:17:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:35.753 16:17:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:40:35.753 16:17:39 -- common/autotest_common.sh@857 -- # local i 00:40:35.753 16:17:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:40:35.753 16:17:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:40:35.753 16:17:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:40:35.753 16:17:39 -- common/autotest_common.sh@861 -- # break 00:40:35.753 16:17:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:40:35.753 16:17:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:40:35.753 16:17:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:35.753 1+0 records in 00:40:35.753 1+0 records out 00:40:35.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362096 s, 11.3 MB/s 00:40:35.753 16:17:39 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:35.753 16:17:40 -- common/autotest_common.sh@874 -- # size=4096 00:40:35.753 16:17:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:35.753 16:17:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:40:35.753 16:17:40 -- common/autotest_common.sh@877 -- # return 0 00:40:35.753 16:17:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:35.753 16:17:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:35.753 16:17:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:40:36.054 /dev/nbd1 00:40:36.054 16:17:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:36.054 16:17:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:36.054 16:17:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:40:36.054 16:17:40 -- common/autotest_common.sh@857 -- # local i 00:40:36.054 16:17:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:40:36.054 16:17:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:40:36.054 16:17:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:40:36.054 16:17:40 -- common/autotest_common.sh@861 -- # break 00:40:36.054 16:17:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:40:36.054 16:17:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:40:36.054 16:17:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:36.054 1+0 records in 00:40:36.054 1+0 records out 00:40:36.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392613 s, 10.4 MB/s 00:40:36.054 16:17:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:36.054 16:17:40 -- common/autotest_common.sh@874 -- # size=4096 00:40:36.054 16:17:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:36.054 16:17:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:40:36.054 16:17:40 -- common/autotest_common.sh@877 -- # return 0 00:40:36.054 16:17:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:36.054 16:17:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:40:36.054 16:17:40 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:40:36.311 16:17:40 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:40:36.311 16:17:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:36.311 16:17:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:40:36.311 16:17:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:36.311 16:17:40 -- bdev/nbd_common.sh@51 -- # local i 00:40:36.311 16:17:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:36.311 16:17:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@41 -- # break 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@45 -- # return 0 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:36.568 16:17:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@41 -- # break 00:40:36.826 16:17:41 -- bdev/nbd_common.sh@45 -- # return 0 00:40:36.826 16:17:41 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:40:36.826 16:17:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:40:36.826 16:17:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:40:36.826 16:17:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:40:37.084 16:17:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:40:37.340 [2024-07-22 16:17:41.570533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:40:37.341 [2024-07-22 16:17:41.570655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:37.341 [2024-07-22 16:17:41.570699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:40:37.341 [2024-07-22 16:17:41.570716] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:37.341 [2024-07-22 16:17:41.573605] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:37.341 [2024-07-22 16:17:41.573651] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:40:37.341 [2024-07-22 16:17:41.573773] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:40:37.341 [2024-07-22 16:17:41.573855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:37.341 BaseBdev1 00:40:37.341 16:17:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:40:37.341 16:17:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:40:37.341 16:17:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:40:37.597 16:17:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:40:38.162 [2024-07-22 16:17:42.146774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:40:38.162 [2024-07-22 16:17:42.146894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:38.162 [2024-07-22 16:17:42.146941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:40:38.162 [2024-07-22 16:17:42.146959] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:38.162 [2024-07-22 16:17:42.147589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:38.162 [2024-07-22 16:17:42.147626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:40:38.162 [2024-07-22 16:17:42.147766] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:40:38.162 [2024-07-22 16:17:42.147814] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:40:38.162 [2024-07-22 16:17:42.147834] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:38.162 [2024-07-22 16:17:42.147869] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state configuring 00:40:38.162 [2024-07-22 16:17:42.147965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:38.162 BaseBdev2 00:40:38.162 16:17:42 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:40:38.162 16:17:42 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:38.420 [2024-07-22 16:17:42.654959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:38.420 [2024-07-22 16:17:42.655140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:38.420 [2024-07-22 16:17:42.655197] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:40:38.420 [2024-07-22 16:17:42.655217] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:38.420 [2024-07-22 16:17:42.655882] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:38.420 [2024-07-22 16:17:42.655929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:38.420 [2024-07-22 16:17:42.656082] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:40:38.420 [2024-07-22 16:17:42.656125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:40:38.420 spare 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:38.420 16:17:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:38.678 [2024-07-22 16:17:42.756287] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:40:38.678 [2024-07-22 16:17:42.756414] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:40:38.678 [2024-07-22 16:17:42.756642] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1390 00:40:38.678 [2024-07-22 16:17:42.757233] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:40:38.678 [2024-07-22 16:17:42.757267] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:40:38.678 [2024-07-22 16:17:42.757511] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:38.936 16:17:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:38.936 "name": "raid_bdev1", 00:40:38.936 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:38.936 "strip_size_kb": 0, 00:40:38.936 "state": "online", 00:40:38.936 "raid_level": "raid1", 00:40:38.936 "superblock": true, 00:40:38.936 "num_base_bdevs": 2, 00:40:38.936 "num_base_bdevs_discovered": 2, 00:40:38.936 "num_base_bdevs_operational": 2, 00:40:38.936 "base_bdevs_list": [ 00:40:38.936 { 00:40:38.936 "name": "spare", 00:40:38.936 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:38.936 "is_configured": true, 00:40:38.936 "data_offset": 2048, 00:40:38.936 "data_size": 63488 00:40:38.936 }, 00:40:38.936 { 00:40:38.936 "name": "BaseBdev2", 00:40:38.936 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:38.936 "is_configured": true, 00:40:38.936 "data_offset": 2048, 00:40:38.936 "data_size": 63488 00:40:38.936 } 00:40:38.936 ] 00:40:38.936 }' 00:40:38.936 16:17:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:38.936 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:40:39.194 16:17:43 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:39.194 16:17:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:39.194 16:17:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:40:39.194 16:17:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:40:39.194 16:17:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:39.194 16:17:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:39.194 16:17:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:40:39.452 16:17:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:39.452 "name": "raid_bdev1", 00:40:39.452 "uuid": "b4eecd4d-ec50-4ee0-9ab6-c5a0f1a339f6", 00:40:39.452 "strip_size_kb": 0, 00:40:39.452 "state": "online", 00:40:39.452 "raid_level": "raid1", 00:40:39.452 "superblock": true, 00:40:39.452 "num_base_bdevs": 2, 00:40:39.452 "num_base_bdevs_discovered": 2, 00:40:39.452 "num_base_bdevs_operational": 2, 00:40:39.452 "base_bdevs_list": [ 00:40:39.452 { 00:40:39.452 "name": "spare", 00:40:39.452 "uuid": "87e90077-ba3b-53ae-ab3a-4454f422c575", 00:40:39.452 "is_configured": true, 00:40:39.452 "data_offset": 2048, 00:40:39.452 "data_size": 63488 00:40:39.452 }, 00:40:39.452 { 00:40:39.452 "name": "BaseBdev2", 00:40:39.452 "uuid": "3ae3f334-d2fe-5ca3-bcf9-04a68eb9f411", 00:40:39.452 "is_configured": true, 00:40:39.452 "data_offset": 2048, 00:40:39.452 "data_size": 63488 00:40:39.452 } 00:40:39.452 ] 00:40:39.452 }' 00:40:39.452 16:17:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:39.452 16:17:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:39.452 16:17:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:39.452 16:17:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:40:39.452 16:17:43 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:39.452 16:17:43 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:40:39.712 16:17:43 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:40:39.712 16:17:43 -- bdev/bdev_raid.sh@709 -- # killprocess 79990 00:40:39.712 16:17:43 -- common/autotest_common.sh@926 -- # '[' -z 79990 ']' 00:40:39.712 16:17:43 -- common/autotest_common.sh@930 -- # kill -0 79990 00:40:39.712 16:17:43 -- common/autotest_common.sh@931 -- # uname 00:40:39.712 16:17:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:39.712 16:17:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79990 00:40:39.712 16:17:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:39.712 16:17:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:39.712 killing process with pid 79990 00:40:39.712 16:17:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79990' 00:40:39.712 Received shutdown signal, test time was about 60.000000 seconds 00:40:39.712 00:40:39.712 Latency(us) 00:40:39.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:39.712 =================================================================================================================== 00:40:39.712 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:39.712 16:17:43 -- common/autotest_common.sh@945 -- # kill 79990 00:40:39.712 [2024-07-22 16:17:43.904240] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:39.712 [2024-07-22 16:17:43.904362] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:39.712 [2024-07-22 16:17:43.904450] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:40:39.712 [2024-07-22 16:17:43.904486] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:40:39.712 16:17:43 -- common/autotest_common.sh@950 -- # wait 79990 00:40:39.969 [2024-07-22 16:17:44.203751] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:40:41.876 
16:17:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:40:41.876 00:40:41.876 real 0m27.420s 00:40:41.876 user 0m36.288s 00:40:41.876 sys 0m5.284s 00:40:41.876 16:17:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:41.876 16:17:45 -- common/autotest_common.sh@10 -- # set +x 00:40:41.876 ************************************ 00:40:41.876 END TEST raid_rebuild_test_sb 00:40:41.876 ************************************ 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:40:41.877 16:17:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:40:41.877 16:17:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:41.877 16:17:45 -- common/autotest_common.sh@10 -- # set +x 00:40:41.877 ************************************ 00:40:41.877 START TEST raid_rebuild_test_io 00:40:41.877 ************************************ 00:40:41.877 16:17:45 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=80598 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 80598 /var/tmp/spdk-raid.sock 00:40:41.877 16:17:45 -- common/autotest_common.sh@819 -- # '[' -z 80598 ']' 00:40:41.877 16:17:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:41.877 16:17:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:41.877 16:17:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:41.877 16:17:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:41.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
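Stripped of the xtrace prefixes, the launch sequence above amounts to starting bdevperf in wait-for-RPC mode and polling its UNIX socket until it answers; a minimal sketch, where the rpc_get_methods readiness poll is an assumption rather than a line taken from this log:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # -z keeps bdevperf idle until perform_tests is issued; -L bdev_raid enables the raid debug log.
  # -T raid_bdev1 names the target, -t 60 -w randrw -M 50 -o 3M -q 2 describes the background I/O.
  $bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done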
00:40:41.877 16:17:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:41.877 16:17:45 -- common/autotest_common.sh@10 -- # set +x 00:40:41.877 [2024-07-22 16:17:45.777964] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:40:41.877 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:41.877 Zero copy mechanism will not be used. 00:40:41.877 [2024-07-22 16:17:45.778232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80598 ] 00:40:41.877 [2024-07-22 16:17:45.954125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.135 [2024-07-22 16:17:46.266143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:42.393 [2024-07-22 16:17:46.498184] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:40:42.651 16:17:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:40:42.651 16:17:46 -- common/autotest_common.sh@852 -- # return 0 00:40:42.651 16:17:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:40:42.651 16:17:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:40:42.651 16:17:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:40:42.909 BaseBdev1 00:40:42.909 16:17:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:40:42.909 16:17:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:40:42.909 16:17:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:40:43.166 BaseBdev2 00:40:43.166 16:17:47 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:40:43.424 spare_malloc 00:40:43.424 16:17:47 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:40:43.683 spare_delay 00:40:43.683 16:17:47 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:40:43.972 [2024-07-22 16:17:48.046869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:40:43.972 [2024-07-22 16:17:48.047039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:40:43.972 [2024-07-22 16:17:48.047115] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:40:43.972 [2024-07-22 16:17:48.047151] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:40:43.972 [2024-07-22 16:17:48.050865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:40:43.972 [2024-07-22 16:17:48.050928] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:40:43.972 spare 00:40:43.972 16:17:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:40:44.230 [2024-07-22 16:17:48.315466] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:40:44.230 [2024-07-22 16:17:48.318345] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:40:44.230 [2024-07-22 16:17:48.318454] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:40:44.230 [2024-07-22 16:17:48.318476] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:40:44.230 [2024-07-22 16:17:48.318684] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:40:44.230 [2024-07-22 16:17:48.319235] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:40:44.230 [2024-07-22 16:17:48.319267] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:40:44.230 [2024-07-22 16:17:48.319574] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:44.230 16:17:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:44.488 16:17:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:44.488 "name": "raid_bdev1", 00:40:44.488 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:44.488 "strip_size_kb": 0, 00:40:44.488 "state": "online", 00:40:44.488 "raid_level": "raid1", 00:40:44.488 "superblock": false, 00:40:44.488 "num_base_bdevs": 2, 00:40:44.488 "num_base_bdevs_discovered": 2, 00:40:44.488 "num_base_bdevs_operational": 2, 00:40:44.488 "base_bdevs_list": [ 00:40:44.488 { 00:40:44.488 "name": "BaseBdev1", 00:40:44.488 "uuid": "0da7f44e-497d-4e8a-b37b-2b68bb99700e", 00:40:44.488 "is_configured": true, 00:40:44.488 "data_offset": 0, 00:40:44.488 "data_size": 65536 00:40:44.488 }, 00:40:44.488 { 00:40:44.488 "name": "BaseBdev2", 00:40:44.488 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:44.488 "is_configured": true, 00:40:44.488 "data_offset": 0, 00:40:44.488 "data_size": 65536 00:40:44.488 } 00:40:44.488 ] 00:40:44.488 }' 00:40:44.488 16:17:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:44.488 16:17:48 -- common/autotest_common.sh@10 -- # set +x 00:40:44.746 16:17:48 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:40:44.746 16:17:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:40:45.005 [2024-07-22 16:17:49.244374] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:40:45.005 16:17:49 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:40:45.005 16:17:49 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:45.005 
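The verify_raid_bdev_state helper seen here is essentially a set of jq assertions over bdev_raid_get_bdevs output; condensed, using the field names printed in the JSON above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r .state      <<<"$info") == online ]]
  [[ $(jq -r .raid_level <<<"$info") == raid1  ]]
  (( $(jq -r .num_base_bdevs_discovered  <<<"$info") == 2 ))
  (( $(jq -r .num_base_bdevs_operational <<<"$info") == 2 ))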
16:17:49 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:40:45.264 16:17:49 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:40:45.264 16:17:49 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:40:45.264 16:17:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:40:45.264 16:17:49 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:40:45.522 [2024-07-22 16:17:49.645397] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:40:45.522 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:45.522 Zero copy mechanism will not be used. 00:40:45.522 Running I/O for 60 seconds... 00:40:45.522 [2024-07-22 16:17:49.789328] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:40:45.522 [2024-07-22 16:17:49.789858] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:40:45.780 16:17:49 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:45.781 16:17:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:46.039 16:17:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:46.039 "name": "raid_bdev1", 00:40:46.039 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:46.039 "strip_size_kb": 0, 00:40:46.039 "state": "online", 00:40:46.039 "raid_level": "raid1", 00:40:46.039 "superblock": false, 00:40:46.039 "num_base_bdevs": 2, 00:40:46.039 "num_base_bdevs_discovered": 1, 00:40:46.039 "num_base_bdevs_operational": 1, 00:40:46.039 "base_bdevs_list": [ 00:40:46.039 { 00:40:46.039 "name": null, 00:40:46.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:46.039 "is_configured": false, 00:40:46.039 "data_offset": 0, 00:40:46.039 "data_size": 65536 00:40:46.039 }, 00:40:46.039 { 00:40:46.039 "name": "BaseBdev2", 00:40:46.039 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:46.039 "is_configured": true, 00:40:46.039 "data_offset": 0, 00:40:46.039 "data_size": 65536 00:40:46.039 } 00:40:46.039 ] 00:40:46.039 }' 00:40:46.039 16:17:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:46.039 16:17:50 -- common/autotest_common.sh@10 -- # set +x 00:40:46.297 16:17:50 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:46.555 [2024-07-22 16:17:50.750760] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:40:46.555 [2024-07-22 16:17:50.750839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:40:46.555 [2024-07-22 16:17:50.809964] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:40:46.555 [2024-07-22 16:17:50.812665] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:46.555 16:17:50 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:40:46.812 [2024-07-22 16:17:50.958227] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:40:47.070 [2024-07-22 16:17:51.089713] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:40:47.070 [2024-07-22 16:17:51.090226] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:40:47.070 [2024-07-22 16:17:51.337394] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:40:47.636 [2024-07-22 16:17:51.707915] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:40:47.636 [2024-07-22 16:17:51.708712] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:40:47.636 16:17:51 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:47.636 16:17:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:47.636 16:17:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:47.636 16:17:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:47.636 16:17:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:47.636 16:17:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:47.636 16:17:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:47.894 [2024-07-22 16:17:51.922471] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:40:47.894 16:17:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:47.894 "name": "raid_bdev1", 00:40:47.894 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:47.894 "strip_size_kb": 0, 00:40:47.894 "state": "online", 00:40:47.894 "raid_level": "raid1", 00:40:47.894 "superblock": false, 00:40:47.894 "num_base_bdevs": 2, 00:40:47.894 "num_base_bdevs_discovered": 2, 00:40:47.894 "num_base_bdevs_operational": 2, 00:40:47.894 "process": { 00:40:47.894 "type": "rebuild", 00:40:47.894 "target": "spare", 00:40:47.894 "progress": { 00:40:47.894 "blocks": 16384, 00:40:47.894 "percent": 25 00:40:47.894 } 00:40:47.894 }, 00:40:47.894 "base_bdevs_list": [ 00:40:47.894 { 00:40:47.894 "name": "spare", 00:40:47.894 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:47.894 "is_configured": true, 00:40:47.894 "data_offset": 0, 00:40:47.894 "data_size": 65536 00:40:47.894 }, 00:40:47.894 { 00:40:47.894 "name": "BaseBdev2", 00:40:47.894 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:47.894 "is_configured": true, 00:40:47.894 "data_offset": 0, 00:40:47.894 "data_size": 65536 00:40:47.894 } 00:40:47.894 ] 00:40:47.894 }' 00:40:47.894 16:17:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:47.894 16:17:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:47.894 16:17:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:47.894 16:17:52 
-- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:47.894 16:17:52 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:40:48.154 [2024-07-22 16:17:52.287175] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:40:48.155 [2024-07-22 16:17:52.351420] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:40:48.414 [2024-07-22 16:17:52.524331] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:40:48.414 [2024-07-22 16:17:52.536545] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:48.414 [2024-07-22 16:17:52.571668] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:48.414 16:17:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:48.672 16:17:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:48.672 "name": "raid_bdev1", 00:40:48.672 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:48.672 "strip_size_kb": 0, 00:40:48.672 "state": "online", 00:40:48.672 "raid_level": "raid1", 00:40:48.672 "superblock": false, 00:40:48.672 "num_base_bdevs": 2, 00:40:48.672 "num_base_bdevs_discovered": 1, 00:40:48.672 "num_base_bdevs_operational": 1, 00:40:48.672 "base_bdevs_list": [ 00:40:48.672 { 00:40:48.672 "name": null, 00:40:48.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:48.672 "is_configured": false, 00:40:48.672 "data_offset": 0, 00:40:48.672 "data_size": 65536 00:40:48.672 }, 00:40:48.672 { 00:40:48.672 "name": "BaseBdev2", 00:40:48.672 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:48.672 "is_configured": true, 00:40:48.672 "data_offset": 0, 00:40:48.672 "data_size": 65536 00:40:48.672 } 00:40:48.672 ] 00:40:48.672 }' 00:40:48.672 16:17:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:48.672 16:17:52 -- common/autotest_common.sh@10 -- # set +x 00:40:49.242 16:17:53 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:49.242 16:17:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:49.242 16:17:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:40:49.242 16:17:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:40:49.242 16:17:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:49.242 16:17:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:49.242 
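The core of raid_rebuild_test_io is this degrade-and-rebuild cycle, driven over the RPC socket while the background I/O keeps running; condensed from the calls above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1      # degrade: the raid1 drops to 1 of 2 base bdevs
  "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare  # re-attach via the delayed 'spare' passthru; rebuild starts
  # Removing the rebuild target mid-flight (bdev_raid_remove_base_bdev spare) aborts the rebuild with the
  # "Finished rebuild ... No such device" warning above and puts the array back into the degraded 1-of-2 state.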
16:17:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:49.508 16:17:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:49.508 "name": "raid_bdev1", 00:40:49.508 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:49.508 "strip_size_kb": 0, 00:40:49.508 "state": "online", 00:40:49.508 "raid_level": "raid1", 00:40:49.508 "superblock": false, 00:40:49.508 "num_base_bdevs": 2, 00:40:49.508 "num_base_bdevs_discovered": 1, 00:40:49.508 "num_base_bdevs_operational": 1, 00:40:49.508 "base_bdevs_list": [ 00:40:49.508 { 00:40:49.508 "name": null, 00:40:49.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:40:49.508 "is_configured": false, 00:40:49.508 "data_offset": 0, 00:40:49.508 "data_size": 65536 00:40:49.508 }, 00:40:49.508 { 00:40:49.508 "name": "BaseBdev2", 00:40:49.508 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:49.508 "is_configured": true, 00:40:49.508 "data_offset": 0, 00:40:49.508 "data_size": 65536 00:40:49.508 } 00:40:49.508 ] 00:40:49.508 }' 00:40:49.508 16:17:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:49.508 16:17:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:49.508 16:17:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:49.508 16:17:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:40:49.508 16:17:53 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:40:49.795 [2024-07-22 16:17:53.873796] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:40:49.795 [2024-07-22 16:17:53.873863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:40:49.795 16:17:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:40:49.795 [2024-07-22 16:17:53.943357] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:40:49.795 [2024-07-22 16:17:53.946019] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:40:50.054 [2024-07-22 16:17:54.074255] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:40:50.054 [2024-07-22 16:17:54.074980] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:40:50.054 [2024-07-22 16:17:54.203910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:40:50.054 [2024-07-22 16:17:54.204342] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:40:50.313 [2024-07-22 16:17:54.556429] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:40:50.572 [2024-07-22 16:17:54.780821] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:40:50.830 16:17:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:50.830 16:17:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:50.830 16:17:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:50.830 16:17:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:50.830 16:17:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:50.830 16:17:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:50.830 16:17:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:50.830 [2024-07-22 16:17:55.054094] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:40:51.088 [2024-07-22 16:17:55.166591] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:40:51.088 [2024-07-22 16:17:55.167312] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:51.088 "name": "raid_bdev1", 00:40:51.088 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:51.088 "strip_size_kb": 0, 00:40:51.088 "state": "online", 00:40:51.088 "raid_level": "raid1", 00:40:51.088 "superblock": false, 00:40:51.088 "num_base_bdevs": 2, 00:40:51.088 "num_base_bdevs_discovered": 2, 00:40:51.088 "num_base_bdevs_operational": 2, 00:40:51.088 "process": { 00:40:51.088 "type": "rebuild", 00:40:51.088 "target": "spare", 00:40:51.088 "progress": { 00:40:51.088 "blocks": 16384, 00:40:51.088 "percent": 25 00:40:51.088 } 00:40:51.088 }, 00:40:51.088 "base_bdevs_list": [ 00:40:51.088 { 00:40:51.088 "name": "spare", 00:40:51.088 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:51.088 "is_configured": true, 00:40:51.088 "data_offset": 0, 00:40:51.088 "data_size": 65536 00:40:51.088 }, 00:40:51.088 { 00:40:51.088 "name": "BaseBdev2", 00:40:51.088 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:51.088 "is_configured": true, 00:40:51.088 "data_offset": 0, 00:40:51.088 "data_size": 65536 00:40:51.088 } 00:40:51.088 ] 00:40:51.088 }' 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@657 -- # local timeout=440 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:51.088 16:17:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:51.346 [2024-07-22 16:17:55.424256] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:40:51.346 [2024-07-22 16:17:55.425078] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:40:51.346 
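Rebuild progress is observed through the same bdev_raid_get_bdevs RPC; the jq paths below are inferred from the JSON structure shown in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  jq -r '.process.type   // "none"'  <<<"$info"   # "rebuild" while running, "none" otherwise
  jq -r '.process.target // "none"'  <<<"$info"   # the bdev being rebuilt onto ("spare" here)
  jq -r '.process.progress.percent'  <<<"$info"   # 25, 31, 62, then 100 in the snapshots from this run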
16:17:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:51.346 "name": "raid_bdev1", 00:40:51.346 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:51.346 "strip_size_kb": 0, 00:40:51.346 "state": "online", 00:40:51.346 "raid_level": "raid1", 00:40:51.346 "superblock": false, 00:40:51.346 "num_base_bdevs": 2, 00:40:51.346 "num_base_bdevs_discovered": 2, 00:40:51.346 "num_base_bdevs_operational": 2, 00:40:51.346 "process": { 00:40:51.346 "type": "rebuild", 00:40:51.346 "target": "spare", 00:40:51.346 "progress": { 00:40:51.346 "blocks": 20480, 00:40:51.346 "percent": 31 00:40:51.346 } 00:40:51.346 }, 00:40:51.346 "base_bdevs_list": [ 00:40:51.346 { 00:40:51.346 "name": "spare", 00:40:51.346 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:51.346 "is_configured": true, 00:40:51.346 "data_offset": 0, 00:40:51.346 "data_size": 65536 00:40:51.346 }, 00:40:51.346 { 00:40:51.346 "name": "BaseBdev2", 00:40:51.346 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:51.346 "is_configured": true, 00:40:51.346 "data_offset": 0, 00:40:51.346 "data_size": 65536 00:40:51.346 } 00:40:51.346 ] 00:40:51.346 }' 00:40:51.346 16:17:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:51.346 16:17:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:51.346 16:17:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:51.346 16:17:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:51.346 16:17:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:51.604 [2024-07-22 16:17:55.654557] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:40:51.604 [2024-07-22 16:17:55.673050] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:40:51.862 [2024-07-22 16:17:56.013933] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:40:52.127 [2024-07-22 16:17:56.134113] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:52.394 16:17:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:52.655 [2024-07-22 16:17:56.679096] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:40:52.655 [2024-07-22 16:17:56.799133] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:40:52.655 16:17:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:52.655 "name": "raid_bdev1", 00:40:52.655 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:52.655 "strip_size_kb": 0, 00:40:52.655 "state": "online", 00:40:52.655 "raid_level": "raid1", 00:40:52.655 "superblock": false, 00:40:52.655 
"num_base_bdevs": 2, 00:40:52.655 "num_base_bdevs_discovered": 2, 00:40:52.655 "num_base_bdevs_operational": 2, 00:40:52.655 "process": { 00:40:52.655 "type": "rebuild", 00:40:52.655 "target": "spare", 00:40:52.655 "progress": { 00:40:52.655 "blocks": 40960, 00:40:52.655 "percent": 62 00:40:52.655 } 00:40:52.655 }, 00:40:52.655 "base_bdevs_list": [ 00:40:52.655 { 00:40:52.655 "name": "spare", 00:40:52.655 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:52.655 "is_configured": true, 00:40:52.655 "data_offset": 0, 00:40:52.655 "data_size": 65536 00:40:52.655 }, 00:40:52.655 { 00:40:52.655 "name": "BaseBdev2", 00:40:52.655 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:52.655 "is_configured": true, 00:40:52.655 "data_offset": 0, 00:40:52.655 "data_size": 65536 00:40:52.655 } 00:40:52.655 ] 00:40:52.655 }' 00:40:52.655 16:17:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:52.655 16:17:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:52.655 16:17:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:52.655 16:17:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:52.655 16:17:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:53.221 [2024-07-22 16:17:57.472667] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:40:53.478 [2024-07-22 16:17:57.694395] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:40:53.736 [2024-07-22 16:17:57.804317] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:53.736 16:17:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:53.994 [2024-07-22 16:17:58.147743] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:40:53.994 16:17:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:53.994 "name": "raid_bdev1", 00:40:53.994 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:53.994 "strip_size_kb": 0, 00:40:53.994 "state": "online", 00:40:53.994 "raid_level": "raid1", 00:40:53.994 "superblock": false, 00:40:53.994 "num_base_bdevs": 2, 00:40:53.994 "num_base_bdevs_discovered": 2, 00:40:53.994 "num_base_bdevs_operational": 2, 00:40:53.994 "process": { 00:40:53.994 "type": "rebuild", 00:40:53.994 "target": "spare", 00:40:53.994 "progress": { 00:40:53.994 "blocks": 65536, 00:40:53.994 "percent": 100 00:40:53.994 } 00:40:53.994 }, 00:40:53.994 "base_bdevs_list": [ 00:40:53.994 { 00:40:53.994 "name": "spare", 00:40:53.994 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:53.994 "is_configured": true, 00:40:53.994 "data_offset": 0, 00:40:53.994 "data_size": 65536 00:40:53.994 }, 00:40:53.994 { 00:40:53.994 "name": "BaseBdev2", 00:40:53.994 "uuid": 
"be6bfbeb-542d-485e-835f-006856352801", 00:40:53.994 "is_configured": true, 00:40:53.994 "data_offset": 0, 00:40:53.994 "data_size": 65536 00:40:53.994 } 00:40:53.994 ] 00:40:53.994 }' 00:40:53.994 16:17:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:53.994 16:17:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:40:53.994 16:17:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:53.994 16:17:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:40:53.994 16:17:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:40:53.994 [2024-07-22 16:17:58.247711] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:40:53.994 [2024-07-22 16:17:58.250550] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:55.369 "name": "raid_bdev1", 00:40:55.369 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:55.369 "strip_size_kb": 0, 00:40:55.369 "state": "online", 00:40:55.369 "raid_level": "raid1", 00:40:55.369 "superblock": false, 00:40:55.369 "num_base_bdevs": 2, 00:40:55.369 "num_base_bdevs_discovered": 2, 00:40:55.369 "num_base_bdevs_operational": 2, 00:40:55.369 "base_bdevs_list": [ 00:40:55.369 { 00:40:55.369 "name": "spare", 00:40:55.369 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:55.369 "is_configured": true, 00:40:55.369 "data_offset": 0, 00:40:55.369 "data_size": 65536 00:40:55.369 }, 00:40:55.369 { 00:40:55.369 "name": "BaseBdev2", 00:40:55.369 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:55.369 "is_configured": true, 00:40:55.369 "data_offset": 0, 00:40:55.369 "data_size": 65536 00:40:55.369 } 00:40:55.369 ] 00:40:55.369 }' 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@660 -- # break 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:55.369 16:17:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:55.628 16:17:59 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:40:55.628 "name": "raid_bdev1", 00:40:55.628 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:55.628 "strip_size_kb": 0, 00:40:55.628 "state": "online", 00:40:55.628 "raid_level": "raid1", 00:40:55.628 "superblock": false, 00:40:55.628 "num_base_bdevs": 2, 00:40:55.628 "num_base_bdevs_discovered": 2, 00:40:55.628 "num_base_bdevs_operational": 2, 00:40:55.628 "base_bdevs_list": [ 00:40:55.628 { 00:40:55.628 "name": "spare", 00:40:55.628 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:55.628 "is_configured": true, 00:40:55.628 "data_offset": 0, 00:40:55.628 "data_size": 65536 00:40:55.628 }, 00:40:55.628 { 00:40:55.628 "name": "BaseBdev2", 00:40:55.628 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:55.628 "is_configured": true, 00:40:55.628 "data_offset": 0, 00:40:55.628 "data_size": 65536 00:40:55.628 } 00:40:55.628 ] 00:40:55.628 }' 00:40:55.628 16:17:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:40:55.628 16:17:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:40:55.628 16:17:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:55.886 16:17:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:40:55.886 16:18:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:40:55.886 "name": "raid_bdev1", 00:40:55.886 "uuid": "3d553b8a-12a8-4ad9-85dc-2d73bccb40d9", 00:40:55.886 "strip_size_kb": 0, 00:40:55.886 "state": "online", 00:40:55.886 "raid_level": "raid1", 00:40:55.886 "superblock": false, 00:40:55.886 "num_base_bdevs": 2, 00:40:55.886 "num_base_bdevs_discovered": 2, 00:40:55.886 "num_base_bdevs_operational": 2, 00:40:55.886 "base_bdevs_list": [ 00:40:55.886 { 00:40:55.886 "name": "spare", 00:40:55.886 "uuid": "22217b51-966c-5325-897b-4f10bb03a774", 00:40:55.886 "is_configured": true, 00:40:55.886 "data_offset": 0, 00:40:55.886 "data_size": 65536 00:40:55.886 }, 00:40:55.886 { 00:40:55.886 "name": "BaseBdev2", 00:40:55.886 "uuid": "be6bfbeb-542d-485e-835f-006856352801", 00:40:55.886 "is_configured": true, 00:40:55.886 "data_offset": 0, 00:40:55.886 "data_size": 65536 00:40:55.886 } 00:40:55.886 ] 00:40:55.886 }' 00:40:55.886 16:18:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:40:55.886 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:40:56.453 16:18:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:40:56.453 [2024-07-22 16:18:00.716763] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:40:56.453 [2024-07-22 16:18:00.716827] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:40:56.744 00:40:56.744 Latency(us) 00:40:56.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:56.744 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:40:56.744 raid_bdev1 : 11.17 96.80 290.39 0.00 0.00 13842.22 268.10 127735.62 00:40:56.744 =================================================================================================================== 00:40:56.744 Total : 96.80 290.39 0.00 0.00 13842.22 268.10 127735.62 00:40:56.744 [2024-07-22 16:18:00.842791] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:40:56.744 [2024-07-22 16:18:00.842896] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:40:56.744 [2024-07-22 16:18:00.843231] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr0 00:40:56.744 ee all in destruct 00:40:56.744 [2024-07-22 16:18:00.843416] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:40:56.744 16:18:00 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:40:56.744 16:18:00 -- bdev/bdev_raid.sh@671 -- # jq length 00:40:57.007 16:18:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:40:57.007 16:18:01 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:40:57.007 16:18:01 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@12 -- # local i 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:57.008 16:18:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:40:57.266 /dev/nbd0 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:57.266 16:18:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:40:57.266 16:18:01 -- common/autotest_common.sh@857 -- # local i 00:40:57.266 16:18:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:40:57.266 16:18:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:40:57.266 16:18:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:40:57.266 16:18:01 -- common/autotest_common.sh@861 -- # break 00:40:57.266 16:18:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:40:57.266 16:18:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:40:57.266 16:18:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:57.266 1+0 records in 00:40:57.266 1+0 records out 00:40:57.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476214 s, 8.6 MB/s 00:40:57.266 16:18:01 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:57.266 16:18:01 -- common/autotest_common.sh@874 -- # size=4096 00:40:57.266 16:18:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:57.266 16:18:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:40:57.266 16:18:01 -- common/autotest_common.sh@877 -- # return 0 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:57.266 16:18:01 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:40:57.266 16:18:01 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:40:57.266 16:18:01 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@12 -- # local i 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:57.266 16:18:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:40:57.524 /dev/nbd1 00:40:57.524 16:18:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:40:57.524 16:18:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:57.524 16:18:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:40:57.524 16:18:01 -- common/autotest_common.sh@857 -- # local i 00:40:57.524 16:18:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:40:57.524 16:18:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:40:57.524 16:18:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:40:57.524 16:18:01 -- common/autotest_common.sh@861 -- # break 00:40:57.524 16:18:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:40:57.524 16:18:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:40:57.524 16:18:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:57.524 1+0 records in 00:40:57.524 1+0 records out 00:40:57.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489984 s, 8.4 MB/s 00:40:57.524 16:18:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:57.524 16:18:01 -- common/autotest_common.sh@874 -- # size=4096 00:40:57.524 16:18:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:40:57.524 16:18:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:40:57.524 16:18:01 -- common/autotest_common.sh@877 -- # return 0 00:40:57.524 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:57.524 16:18:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:40:57.524 16:18:01 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:40:57.781 16:18:01 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:40:57.781 16:18:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:57.781 16:18:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:40:57.781 16:18:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:57.781 
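Because background I/O was enabled, the rebuilt data is checked by exporting both legs over NBD and comparing them byte-for-byte; condensed from the commands above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" nbd_start_disk spare     /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk BaseBdev2 /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1    # -i skips data_offset bytes; 0 in this run because there is no superblock
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0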
16:18:01 -- bdev/nbd_common.sh@51 -- # local i 00:40:57.781 16:18:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:57.781 16:18:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@41 -- # break 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@45 -- # return 0 00:40:58.040 16:18:02 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@51 -- # local i 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:58.040 16:18:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@41 -- # break 00:40:58.298 16:18:02 -- bdev/nbd_common.sh@45 -- # return 0 00:40:58.298 16:18:02 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:40:58.298 16:18:02 -- bdev/bdev_raid.sh@709 -- # killprocess 80598 00:40:58.298 16:18:02 -- common/autotest_common.sh@926 -- # '[' -z 80598 ']' 00:40:58.298 16:18:02 -- common/autotest_common.sh@930 -- # kill -0 80598 00:40:58.298 16:18:02 -- common/autotest_common.sh@931 -- # uname 00:40:58.298 16:18:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:40:58.298 16:18:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80598 00:40:58.298 killing process with pid 80598 00:40:58.298 Received shutdown signal, test time was about 12.835570 seconds 00:40:58.298 00:40:58.298 Latency(us) 00:40:58.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:58.298 =================================================================================================================== 00:40:58.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:58.298 16:18:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:40:58.298 16:18:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:40:58.298 16:18:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80598' 00:40:58.298 16:18:02 -- common/autotest_common.sh@945 -- # kill 80598 00:40:58.298 16:18:02 -- common/autotest_common.sh@950 -- # wait 80598 00:40:58.298 [2024-07-22 16:18:02.484336] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:40:58.556 [2024-07-22 16:18:02.696387] bdev_raid.c:1251:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:40:59.941 ************************************ 00:40:59.941 END TEST raid_rebuild_test_io 00:40:59.941 ************************************ 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:40:59.941 00:40:59.941 real 0m18.345s 00:40:59.941 user 0m26.259s 00:40:59.941 sys 0m2.544s 00:40:59.941 16:18:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:59.941 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:40:59.941 16:18:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:40:59.941 16:18:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:40:59.941 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:40:59.941 ************************************ 00:40:59.941 START TEST raid_rebuild_test_sb_io 00:40:59.941 ************************************ 00:40:59.941 16:18:04 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=81053 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81053 /var/tmp/spdk-raid.sock 00:40:59.941 16:18:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:40:59.941 16:18:04 -- common/autotest_common.sh@819 -- # '[' -z 81053 ']' 00:40:59.941 16:18:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:40:59.941 16:18:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:40:59.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
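raid_rebuild_test_sb_io repeats the same flow with superblock=true; the only construction difference is the -s flag passed to bdev_raid_create, which reserves on-disk metadata and makes data_offset non-zero (2048 blocks of 512 B in the earlier superblock run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  # The NBD byte-compare then has to skip that offset, e.g. cmp -i $((data_offset * 512)); the exact cmp
  # invocation is an assumption here, while the 2048-block offset itself is visible in the JSON above.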
00:40:59.941 16:18:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:40:59.941 16:18:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:40:59.941 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:40:59.941 [2024-07-22 16:18:04.171229] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:40:59.941 I/O size of 3145728 is greater than zero copy threshold (65536). 00:40:59.941 Zero copy mechanism will not be used. 00:40:59.941 [2024-07-22 16:18:04.171402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81053 ] 00:41:00.200 [2024-07-22 16:18:04.339259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.465 [2024-07-22 16:18:04.603149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.731 [2024-07-22 16:18:04.820317] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:00.989 16:18:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:00.989 16:18:05 -- common/autotest_common.sh@852 -- # return 0 00:41:00.989 16:18:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:00.989 16:18:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:00.989 16:18:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:01.266 BaseBdev1_malloc 00:41:01.266 16:18:05 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:01.524 [2024-07-22 16:18:05.690791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:01.524 [2024-07-22 16:18:05.690926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:01.524 [2024-07-22 16:18:05.690982] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:41:01.524 [2024-07-22 16:18:05.691021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:01.524 [2024-07-22 16:18:05.694193] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:01.524 [2024-07-22 16:18:05.694248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:01.524 BaseBdev1 00:41:01.524 16:18:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:01.524 16:18:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:01.524 16:18:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:01.796 BaseBdev2_malloc 00:41:01.796 16:18:05 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:02.055 [2024-07-22 16:18:06.180831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:02.055 [2024-07-22 16:18:06.180953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:02.055 [2024-07-22 16:18:06.181021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:41:02.055 [2024-07-22 16:18:06.181050] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:02.055 [2024-07-22 16:18:06.184225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:02.055 [2024-07-22 16:18:06.184278] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:02.055 BaseBdev2 00:41:02.055 16:18:06 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:41:02.313 spare_malloc 00:41:02.313 16:18:06 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:02.571 spare_delay 00:41:02.571 16:18:06 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:02.830 [2024-07-22 16:18:07.015137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:02.830 [2024-07-22 16:18:07.015251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:02.830 [2024-07-22 16:18:07.015294] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:41:02.830 [2024-07-22 16:18:07.015316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:02.830 [2024-07-22 16:18:07.018730] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:02.830 [2024-07-22 16:18:07.018803] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:02.830 spare 00:41:02.830 16:18:07 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:41:03.111 [2024-07-22 16:18:07.291434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:03.111 [2024-07-22 16:18:07.294193] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:03.111 [2024-07-22 16:18:07.294501] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:41:03.111 [2024-07-22 16:18:07.294551] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:41:03.111 [2024-07-22 16:18:07.294825] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:41:03.111 [2024-07-22 16:18:07.295465] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:41:03.111 [2024-07-22 16:18:07.295511] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:41:03.111 [2024-07-22 16:18:07.295844] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:03.111 
16:18:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:03.111 16:18:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:03.369 16:18:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:03.369 "name": "raid_bdev1", 00:41:03.369 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:03.369 "strip_size_kb": 0, 00:41:03.369 "state": "online", 00:41:03.369 "raid_level": "raid1", 00:41:03.369 "superblock": true, 00:41:03.369 "num_base_bdevs": 2, 00:41:03.369 "num_base_bdevs_discovered": 2, 00:41:03.369 "num_base_bdevs_operational": 2, 00:41:03.369 "base_bdevs_list": [ 00:41:03.369 { 00:41:03.369 "name": "BaseBdev1", 00:41:03.369 "uuid": "2f29a93e-034e-54ab-82cb-91640dabc6d2", 00:41:03.369 "is_configured": true, 00:41:03.369 "data_offset": 2048, 00:41:03.369 "data_size": 63488 00:41:03.369 }, 00:41:03.369 { 00:41:03.369 "name": "BaseBdev2", 00:41:03.369 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:03.369 "is_configured": true, 00:41:03.369 "data_offset": 2048, 00:41:03.369 "data_size": 63488 00:41:03.369 } 00:41:03.369 ] 00:41:03.369 }' 00:41:03.369 16:18:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:03.369 16:18:07 -- common/autotest_common.sh@10 -- # set +x 00:41:03.935 16:18:07 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:03.935 16:18:07 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:41:03.935 [2024-07-22 16:18:08.160375] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:03.935 16:18:08 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:41:03.935 16:18:08 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:03.935 16:18:08 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:04.499 16:18:08 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:41:04.499 16:18:08 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:41:04.499 16:18:08 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:41:04.499 16:18:08 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:41:04.499 [2024-07-22 16:18:08.604862] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:41:04.499 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:04.499 Zero copy mechanism will not be used. 00:41:04.499 Running I/O for 60 seconds... 
00:41:04.499 [2024-07-22 16:18:08.767648] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:04.757 [2024-07-22 16:18:08.775772] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:04.757 16:18:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:05.015 16:18:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:05.015 "name": "raid_bdev1", 00:41:05.015 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:05.015 "strip_size_kb": 0, 00:41:05.015 "state": "online", 00:41:05.015 "raid_level": "raid1", 00:41:05.015 "superblock": true, 00:41:05.015 "num_base_bdevs": 2, 00:41:05.015 "num_base_bdevs_discovered": 1, 00:41:05.015 "num_base_bdevs_operational": 1, 00:41:05.015 "base_bdevs_list": [ 00:41:05.015 { 00:41:05.015 "name": null, 00:41:05.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:05.015 "is_configured": false, 00:41:05.015 "data_offset": 2048, 00:41:05.015 "data_size": 63488 00:41:05.015 }, 00:41:05.015 { 00:41:05.015 "name": "BaseBdev2", 00:41:05.015 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:05.015 "is_configured": true, 00:41:05.015 "data_offset": 2048, 00:41:05.015 "data_size": 63488 00:41:05.015 } 00:41:05.015 ] 00:41:05.015 }' 00:41:05.015 16:18:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:05.015 16:18:09 -- common/autotest_common.sh@10 -- # set +x 00:41:05.273 16:18:09 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:05.530 [2024-07-22 16:18:09.686951] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:05.530 [2024-07-22 16:18:09.687126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:05.530 [2024-07-22 16:18:09.727121] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:41:05.530 [2024-07-22 16:18:09.729895] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:05.531 16:18:09 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:41:05.788 [2024-07-22 16:18:09.841134] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:05.788 [2024-07-22 16:18:09.842018] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:06.046 [2024-07-22 16:18:10.081589] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:41:06.046 [2024-07-22 16:18:10.082271] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:06.304 [2024-07-22 16:18:10.350706] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:41:06.561 [2024-07-22 16:18:10.585340] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:06.561 [2024-07-22 16:18:10.585775] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:06.561 16:18:10 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:06.561 16:18:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:06.561 16:18:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:06.561 16:18:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:06.561 16:18:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:06.561 16:18:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:06.561 16:18:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:06.819 [2024-07-22 16:18:10.933361] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:41:06.819 16:18:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:06.819 "name": "raid_bdev1", 00:41:06.819 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:06.819 "strip_size_kb": 0, 00:41:06.819 "state": "online", 00:41:06.819 "raid_level": "raid1", 00:41:06.819 "superblock": true, 00:41:06.819 "num_base_bdevs": 2, 00:41:06.819 "num_base_bdevs_discovered": 2, 00:41:06.819 "num_base_bdevs_operational": 2, 00:41:06.819 "process": { 00:41:06.819 "type": "rebuild", 00:41:06.819 "target": "spare", 00:41:06.819 "progress": { 00:41:06.819 "blocks": 14336, 00:41:06.819 "percent": 22 00:41:06.819 } 00:41:06.819 }, 00:41:06.819 "base_bdevs_list": [ 00:41:06.819 { 00:41:06.819 "name": "spare", 00:41:06.819 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:06.819 "is_configured": true, 00:41:06.820 "data_offset": 2048, 00:41:06.820 "data_size": 63488 00:41:06.820 }, 00:41:06.820 { 00:41:06.820 "name": "BaseBdev2", 00:41:06.820 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:06.820 "is_configured": true, 00:41:06.820 "data_offset": 2048, 00:41:06.820 "data_size": 63488 00:41:06.820 } 00:41:06.820 ] 00:41:06.820 }' 00:41:06.820 16:18:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:06.820 16:18:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:06.820 16:18:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:06.820 16:18:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:06.820 16:18:11 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:07.078 [2024-07-22 16:18:11.158032] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:41:07.078 [2024-07-22 16:18:11.312928] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:07.343 [2024-07-22 16:18:11.371747] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:41:07.343 
[2024-07-22 16:18:11.473394] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:07.343 [2024-07-22 16:18:11.489685] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:07.343 [2024-07-22 16:18:11.528339] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:07.343 16:18:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:07.602 16:18:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:07.602 "name": "raid_bdev1", 00:41:07.602 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:07.602 "strip_size_kb": 0, 00:41:07.602 "state": "online", 00:41:07.602 "raid_level": "raid1", 00:41:07.602 "superblock": true, 00:41:07.602 "num_base_bdevs": 2, 00:41:07.602 "num_base_bdevs_discovered": 1, 00:41:07.602 "num_base_bdevs_operational": 1, 00:41:07.602 "base_bdevs_list": [ 00:41:07.602 { 00:41:07.602 "name": null, 00:41:07.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:07.602 "is_configured": false, 00:41:07.602 "data_offset": 2048, 00:41:07.602 "data_size": 63488 00:41:07.602 }, 00:41:07.602 { 00:41:07.602 "name": "BaseBdev2", 00:41:07.602 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:07.602 "is_configured": true, 00:41:07.602 "data_offset": 2048, 00:41:07.602 "data_size": 63488 00:41:07.602 } 00:41:07.602 ] 00:41:07.602 }' 00:41:07.602 16:18:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:07.602 16:18:11 -- common/autotest_common.sh@10 -- # set +x 00:41:08.175 16:18:12 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:08.175 16:18:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:08.175 16:18:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:08.175 16:18:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:08.175 16:18:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:08.175 16:18:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:08.175 16:18:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:08.440 16:18:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:08.440 "name": "raid_bdev1", 00:41:08.440 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:08.440 "strip_size_kb": 0, 00:41:08.440 "state": "online", 00:41:08.440 "raid_level": "raid1", 00:41:08.440 "superblock": true, 00:41:08.440 "num_base_bdevs": 2, 00:41:08.440 "num_base_bdevs_discovered": 1, 00:41:08.440 
"num_base_bdevs_operational": 1, 00:41:08.440 "base_bdevs_list": [ 00:41:08.440 { 00:41:08.440 "name": null, 00:41:08.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:08.440 "is_configured": false, 00:41:08.440 "data_offset": 2048, 00:41:08.440 "data_size": 63488 00:41:08.440 }, 00:41:08.440 { 00:41:08.440 "name": "BaseBdev2", 00:41:08.440 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:08.440 "is_configured": true, 00:41:08.440 "data_offset": 2048, 00:41:08.440 "data_size": 63488 00:41:08.440 } 00:41:08.440 ] 00:41:08.440 }' 00:41:08.440 16:18:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:08.440 16:18:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:08.440 16:18:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:08.440 16:18:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:08.440 16:18:12 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:08.698 [2024-07-22 16:18:12.826252] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:08.698 [2024-07-22 16:18:12.826342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:08.698 16:18:12 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:41:08.698 [2024-07-22 16:18:12.887698] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:41:08.698 [2024-07-22 16:18:12.890270] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:08.955 [2024-07-22 16:18:13.007641] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:08.955 [2024-07-22 16:18:13.008335] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:41:08.955 [2024-07-22 16:18:13.121012] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:08.955 [2024-07-22 16:18:13.121359] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:41:09.521 [2024-07-22 16:18:13.490868] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:41:09.521 [2024-07-22 16:18:13.701460] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:09.521 [2024-07-22 16:18:13.701895] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:41:09.779 16:18:13 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:09.779 16:18:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:09.779 16:18:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:09.779 16:18:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:09.779 16:18:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:09.779 16:18:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:09.779 16:18:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:09.779 [2024-07-22 16:18:13.950215] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:41:10.038 
[2024-07-22 16:18:14.078982] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:10.039 "name": "raid_bdev1", 00:41:10.039 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:10.039 "strip_size_kb": 0, 00:41:10.039 "state": "online", 00:41:10.039 "raid_level": "raid1", 00:41:10.039 "superblock": true, 00:41:10.039 "num_base_bdevs": 2, 00:41:10.039 "num_base_bdevs_discovered": 2, 00:41:10.039 "num_base_bdevs_operational": 2, 00:41:10.039 "process": { 00:41:10.039 "type": "rebuild", 00:41:10.039 "target": "spare", 00:41:10.039 "progress": { 00:41:10.039 "blocks": 16384, 00:41:10.039 "percent": 25 00:41:10.039 } 00:41:10.039 }, 00:41:10.039 "base_bdevs_list": [ 00:41:10.039 { 00:41:10.039 "name": "spare", 00:41:10.039 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:10.039 "is_configured": true, 00:41:10.039 "data_offset": 2048, 00:41:10.039 "data_size": 63488 00:41:10.039 }, 00:41:10.039 { 00:41:10.039 "name": "BaseBdev2", 00:41:10.039 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:10.039 "is_configured": true, 00:41:10.039 "data_offset": 2048, 00:41:10.039 "data_size": 63488 00:41:10.039 } 00:41:10.039 ] 00:41:10.039 }' 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:41:10.039 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@657 -- # local timeout=459 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:10.039 16:18:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:10.297 16:18:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:10.297 "name": "raid_bdev1", 00:41:10.297 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:10.297 "strip_size_kb": 0, 00:41:10.297 "state": "online", 00:41:10.297 "raid_level": "raid1", 00:41:10.297 "superblock": true, 00:41:10.297 "num_base_bdevs": 2, 00:41:10.297 "num_base_bdevs_discovered": 2, 00:41:10.297 "num_base_bdevs_operational": 2, 00:41:10.297 "process": { 00:41:10.297 "type": "rebuild", 00:41:10.297 "target": "spare", 00:41:10.297 "progress": { 00:41:10.297 "blocks": 18432, 00:41:10.297 "percent": 29 00:41:10.297 } 00:41:10.297 }, 00:41:10.297 
"base_bdevs_list": [ 00:41:10.297 { 00:41:10.297 "name": "spare", 00:41:10.297 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:10.297 "is_configured": true, 00:41:10.297 "data_offset": 2048, 00:41:10.297 "data_size": 63488 00:41:10.297 }, 00:41:10.297 { 00:41:10.297 "name": "BaseBdev2", 00:41:10.297 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:10.297 "is_configured": true, 00:41:10.297 "data_offset": 2048, 00:41:10.297 "data_size": 63488 00:41:10.297 } 00:41:10.297 ] 00:41:10.297 }' 00:41:10.297 16:18:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:10.297 16:18:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:10.297 16:18:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:10.297 16:18:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:10.297 16:18:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:10.864 [2024-07-22 16:18:14.844401] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:41:10.864 [2024-07-22 16:18:14.844810] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:41:11.123 [2024-07-22 16:18:15.175327] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:11.381 16:18:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:11.640 [2024-07-22 16:18:15.762124] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:41:11.640 16:18:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:11.640 "name": "raid_bdev1", 00:41:11.640 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:11.640 "strip_size_kb": 0, 00:41:11.640 "state": "online", 00:41:11.640 "raid_level": "raid1", 00:41:11.640 "superblock": true, 00:41:11.640 "num_base_bdevs": 2, 00:41:11.640 "num_base_bdevs_discovered": 2, 00:41:11.640 "num_base_bdevs_operational": 2, 00:41:11.640 "process": { 00:41:11.640 "type": "rebuild", 00:41:11.640 "target": "spare", 00:41:11.640 "progress": { 00:41:11.640 "blocks": 38912, 00:41:11.640 "percent": 61 00:41:11.640 } 00:41:11.640 }, 00:41:11.640 "base_bdevs_list": [ 00:41:11.640 { 00:41:11.640 "name": "spare", 00:41:11.640 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:11.640 "is_configured": true, 00:41:11.640 "data_offset": 2048, 00:41:11.640 "data_size": 63488 00:41:11.640 }, 00:41:11.640 { 00:41:11.640 "name": "BaseBdev2", 00:41:11.640 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:11.640 "is_configured": true, 00:41:11.640 "data_offset": 2048, 00:41:11.640 "data_size": 63488 00:41:11.640 } 00:41:11.640 ] 00:41:11.640 }' 00:41:11.640 16:18:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:11.640 16:18:15 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:11.640 16:18:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:11.640 16:18:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:11.640 16:18:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:12.207 [2024-07-22 16:18:16.237275] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:41:12.207 [2024-07-22 16:18:16.467818] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:12.774 16:18:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:13.056 16:18:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:13.056 "name": "raid_bdev1", 00:41:13.056 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:13.056 "strip_size_kb": 0, 00:41:13.056 "state": "online", 00:41:13.056 "raid_level": "raid1", 00:41:13.056 "superblock": true, 00:41:13.056 "num_base_bdevs": 2, 00:41:13.056 "num_base_bdevs_discovered": 2, 00:41:13.056 "num_base_bdevs_operational": 2, 00:41:13.056 "process": { 00:41:13.056 "type": "rebuild", 00:41:13.056 "target": "spare", 00:41:13.056 "progress": { 00:41:13.056 "blocks": 59392, 00:41:13.056 "percent": 93 00:41:13.056 } 00:41:13.056 }, 00:41:13.056 "base_bdevs_list": [ 00:41:13.056 { 00:41:13.056 "name": "spare", 00:41:13.056 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:13.056 "is_configured": true, 00:41:13.056 "data_offset": 2048, 00:41:13.056 "data_size": 63488 00:41:13.056 }, 00:41:13.056 { 00:41:13.056 "name": "BaseBdev2", 00:41:13.056 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:13.056 "is_configured": true, 00:41:13.056 "data_offset": 2048, 00:41:13.056 "data_size": 63488 00:41:13.056 } 00:41:13.056 ] 00:41:13.056 }' 00:41:13.056 16:18:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:13.056 16:18:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:13.056 16:18:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:13.056 16:18:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:13.056 16:18:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:13.056 [2024-07-22 16:18:17.257075] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:13.314 [2024-07-22 16:18:17.357130] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:13.314 [2024-07-22 16:18:17.359819] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:13.880 16:18:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:13.880 16:18:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:13.880 16:18:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:13.880 16:18:18 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:13.880 16:18:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:13.880 16:18:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:13.880 16:18:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:13.880 16:18:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:14.138 16:18:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:14.138 "name": "raid_bdev1", 00:41:14.138 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:14.138 "strip_size_kb": 0, 00:41:14.138 "state": "online", 00:41:14.138 "raid_level": "raid1", 00:41:14.138 "superblock": true, 00:41:14.138 "num_base_bdevs": 2, 00:41:14.138 "num_base_bdevs_discovered": 2, 00:41:14.138 "num_base_bdevs_operational": 2, 00:41:14.138 "base_bdevs_list": [ 00:41:14.138 { 00:41:14.138 "name": "spare", 00:41:14.138 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:14.138 "is_configured": true, 00:41:14.138 "data_offset": 2048, 00:41:14.138 "data_size": 63488 00:41:14.138 }, 00:41:14.138 { 00:41:14.138 "name": "BaseBdev2", 00:41:14.138 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:14.138 "is_configured": true, 00:41:14.138 "data_offset": 2048, 00:41:14.138 "data_size": 63488 00:41:14.138 } 00:41:14.138 ] 00:41:14.138 }' 00:41:14.138 16:18:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:14.138 16:18:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@660 -- # break 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:14.396 "name": "raid_bdev1", 00:41:14.396 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:14.396 "strip_size_kb": 0, 00:41:14.396 "state": "online", 00:41:14.396 "raid_level": "raid1", 00:41:14.396 "superblock": true, 00:41:14.396 "num_base_bdevs": 2, 00:41:14.396 "num_base_bdevs_discovered": 2, 00:41:14.396 "num_base_bdevs_operational": 2, 00:41:14.396 "base_bdevs_list": [ 00:41:14.396 { 00:41:14.396 "name": "spare", 00:41:14.396 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:14.396 "is_configured": true, 00:41:14.396 "data_offset": 2048, 00:41:14.396 "data_size": 63488 00:41:14.396 }, 00:41:14.396 { 00:41:14.396 "name": "BaseBdev2", 00:41:14.396 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:14.396 "is_configured": true, 00:41:14.396 "data_offset": 2048, 00:41:14.396 "data_size": 63488 00:41:14.396 } 00:41:14.396 ] 00:41:14.396 }' 00:41:14.396 16:18:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:14.655 16:18:18 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:14.655 16:18:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:14.913 16:18:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:14.913 "name": "raid_bdev1", 00:41:14.913 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:14.913 "strip_size_kb": 0, 00:41:14.913 "state": "online", 00:41:14.913 "raid_level": "raid1", 00:41:14.913 "superblock": true, 00:41:14.913 "num_base_bdevs": 2, 00:41:14.913 "num_base_bdevs_discovered": 2, 00:41:14.913 "num_base_bdevs_operational": 2, 00:41:14.913 "base_bdevs_list": [ 00:41:14.913 { 00:41:14.913 "name": "spare", 00:41:14.913 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:14.913 "is_configured": true, 00:41:14.913 "data_offset": 2048, 00:41:14.913 "data_size": 63488 00:41:14.913 }, 00:41:14.913 { 00:41:14.913 "name": "BaseBdev2", 00:41:14.913 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:14.914 "is_configured": true, 00:41:14.914 "data_offset": 2048, 00:41:14.914 "data_size": 63488 00:41:14.914 } 00:41:14.914 ] 00:41:14.914 }' 00:41:14.914 16:18:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:14.914 16:18:18 -- common/autotest_common.sh@10 -- # set +x 00:41:15.172 16:18:19 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:15.430 [2024-07-22 16:18:19.514535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:15.430 [2024-07-22 16:18:19.514607] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:15.430 00:41:15.430 Latency(us) 00:41:15.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:15.430 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:41:15.430 raid_bdev1 : 10.99 91.59 274.78 0.00 0.00 15272.97 288.58 126782.37 00:41:15.430 =================================================================================================================== 00:41:15.430 Total : 91.59 274.78 0.00 0.00 15272.97 288.58 126782.37 00:41:15.430 [2024-07-22 16:18:19.621911] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:15.430 [2024-07-22 16:18:19.622013] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:15.430 0 00:41:15.430 [2024-07-22 16:18:19.622156] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:41:15.430 [2024-07-22 16:18:19.622174] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:41:15.430 16:18:19 -- bdev/bdev_raid.sh@671 -- # jq length 00:41:15.430 16:18:19 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:15.688 16:18:19 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:41:15.688 16:18:19 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:41:15.688 16:18:19 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@12 -- # local i 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:15.688 16:18:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:41:15.947 /dev/nbd0 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:15.947 16:18:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:41:15.947 16:18:20 -- common/autotest_common.sh@857 -- # local i 00:41:15.947 16:18:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:41:15.947 16:18:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:41:15.947 16:18:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:41:15.947 16:18:20 -- common/autotest_common.sh@861 -- # break 00:41:15.947 16:18:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:41:15.947 16:18:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:41:15.947 16:18:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:15.947 1+0 records in 00:41:15.947 1+0 records out 00:41:15.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282301 s, 14.5 MB/s 00:41:15.947 16:18:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:15.947 16:18:20 -- common/autotest_common.sh@874 -- # size=4096 00:41:15.947 16:18:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:15.947 16:18:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:41:15.947 16:18:20 -- common/autotest_common.sh@877 -- # return 0 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:15.947 16:18:20 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:41:15.947 16:18:20 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:41:15.947 16:18:20 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:41:15.947 16:18:20 -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@12 -- # local i 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:15.947 16:18:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:41:16.206 /dev/nbd1 00:41:16.206 16:18:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:16.206 16:18:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:16.206 16:18:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:41:16.206 16:18:20 -- common/autotest_common.sh@857 -- # local i 00:41:16.206 16:18:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:41:16.206 16:18:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:41:16.206 16:18:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:41:16.206 16:18:20 -- common/autotest_common.sh@861 -- # break 00:41:16.206 16:18:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:41:16.206 16:18:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:41:16.206 16:18:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:16.206 1+0 records in 00:41:16.206 1+0 records out 00:41:16.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335591 s, 12.2 MB/s 00:41:16.206 16:18:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:16.206 16:18:20 -- common/autotest_common.sh@874 -- # size=4096 00:41:16.206 16:18:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:16.206 16:18:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:41:16.206 16:18:20 -- common/autotest_common.sh@877 -- # return 0 00:41:16.206 16:18:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:16.206 16:18:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:16.206 16:18:20 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:41:16.504 16:18:20 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:41:16.504 16:18:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:16.504 16:18:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:41:16.504 16:18:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:16.504 16:18:20 -- bdev/nbd_common.sh@51 -- # local i 00:41:16.504 16:18:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:16.504 16:18:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@41 -- # break 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@45 -- # return 0 00:41:16.762 16:18:20 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:16.762 
16:18:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@51 -- # local i 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:16.762 16:18:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@41 -- # break 00:41:17.020 16:18:21 -- bdev/nbd_common.sh@45 -- # return 0 00:41:17.020 16:18:21 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:41:17.020 16:18:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:41:17.020 16:18:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:41:17.020 16:18:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:41:17.278 16:18:21 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:17.536 [2024-07-22 16:18:21.739265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:17.536 [2024-07-22 16:18:21.739425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:17.536 [2024-07-22 16:18:21.739475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:41:17.536 [2024-07-22 16:18:21.739492] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:17.536 [2024-07-22 16:18:21.742776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:17.536 [2024-07-22 16:18:21.742838] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:17.536 [2024-07-22 16:18:21.742985] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:41:17.536 [2024-07-22 16:18:21.743113] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:17.536 BaseBdev1 00:41:17.536 16:18:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:41:17.536 16:18:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:41:17.536 16:18:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:41:17.794 16:18:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:18.053 [2024-07-22 16:18:22.259715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:18.053 [2024-07-22 16:18:22.259860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:18.053 [2024-07-22 16:18:22.259929] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:41:18.053 [2024-07-22 16:18:22.259947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:18.053 [2024-07-22 16:18:22.260612] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:41:18.053 [2024-07-22 16:18:22.260702] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:18.053 [2024-07-22 16:18:22.260858] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:41:18.053 [2024-07-22 16:18:22.260876] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:41:18.053 [2024-07-22 16:18:22.260892] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:18.053 [2024-07-22 16:18:22.260969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state configuring 00:41:18.053 [2024-07-22 16:18:22.261084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:18.053 BaseBdev2 00:41:18.053 16:18:22 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:41:18.311 16:18:22 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:18.570 [2024-07-22 16:18:22.779967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:18.570 [2024-07-22 16:18:22.780119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:18.570 [2024-07-22 16:18:22.780161] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:41:18.570 [2024-07-22 16:18:22.780182] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:18.570 [2024-07-22 16:18:22.780851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:18.570 [2024-07-22 16:18:22.780893] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:18.570 [2024-07-22 16:18:22.781041] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:41:18.570 [2024-07-22 16:18:22.781085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:18.570 spare 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:18.570 16:18:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:18.829 [2024-07-22 16:18:22.881243] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:41:18.829 [2024-07-22 16:18:22.881333] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:41:18.829 [2024-07-22 16:18:22.881579] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a7e0 00:41:18.829 [2024-07-22 16:18:22.882196] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:41:18.829 [2024-07-22 16:18:22.882226] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:41:18.829 [2024-07-22 16:18:22.882503] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:18.829 16:18:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:18.829 "name": "raid_bdev1", 00:41:18.829 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:18.829 "strip_size_kb": 0, 00:41:18.829 "state": "online", 00:41:18.829 "raid_level": "raid1", 00:41:18.829 "superblock": true, 00:41:18.829 "num_base_bdevs": 2, 00:41:18.829 "num_base_bdevs_discovered": 2, 00:41:18.829 "num_base_bdevs_operational": 2, 00:41:18.829 "base_bdevs_list": [ 00:41:18.829 { 00:41:18.829 "name": "spare", 00:41:18.829 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:18.829 "is_configured": true, 00:41:18.829 "data_offset": 2048, 00:41:18.829 "data_size": 63488 00:41:18.829 }, 00:41:18.829 { 00:41:18.829 "name": "BaseBdev2", 00:41:18.829 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:18.829 "is_configured": true, 00:41:18.829 "data_offset": 2048, 00:41:18.829 "data_size": 63488 00:41:18.829 } 00:41:18.829 ] 00:41:18.829 }' 00:41:18.829 16:18:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:18.829 16:18:23 -- common/autotest_common.sh@10 -- # set +x 00:41:19.395 16:18:23 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:19.395 16:18:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:19.395 16:18:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:19.395 16:18:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:19.395 16:18:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:19.395 16:18:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:19.395 16:18:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:19.654 16:18:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:19.654 "name": "raid_bdev1", 00:41:19.654 "uuid": "ff7cf1d1-8835-4d4e-a82c-8433a56c9432", 00:41:19.654 "strip_size_kb": 0, 00:41:19.654 "state": "online", 00:41:19.654 "raid_level": "raid1", 00:41:19.654 "superblock": true, 00:41:19.654 "num_base_bdevs": 2, 00:41:19.654 "num_base_bdevs_discovered": 2, 00:41:19.654 "num_base_bdevs_operational": 2, 00:41:19.654 "base_bdevs_list": [ 00:41:19.654 { 00:41:19.654 "name": "spare", 00:41:19.654 "uuid": "25e95fcd-fabc-55df-8840-55ad20e27c23", 00:41:19.654 "is_configured": true, 00:41:19.654 "data_offset": 2048, 00:41:19.654 "data_size": 63488 00:41:19.654 }, 00:41:19.654 { 00:41:19.654 "name": "BaseBdev2", 00:41:19.654 "uuid": "17814f9c-7e97-542e-8dc7-eb31096479c4", 00:41:19.654 "is_configured": true, 00:41:19.654 "data_offset": 2048, 00:41:19.654 "data_size": 63488 00:41:19.654 } 00:41:19.654 ] 00:41:19.654 }' 00:41:19.654 16:18:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:19.654 16:18:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:19.654 16:18:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:19.654 16:18:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:19.654 16:18:23 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:19.654 16:18:23 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:41:19.927 16:18:23 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:41:19.927 16:18:23 -- bdev/bdev_raid.sh@709 -- # killprocess 81053 00:41:19.927 16:18:23 -- common/autotest_common.sh@926 -- # '[' -z 81053 ']' 00:41:19.927 16:18:23 -- common/autotest_common.sh@930 -- # kill -0 81053 00:41:19.927 16:18:23 -- common/autotest_common.sh@931 -- # uname 00:41:19.927 16:18:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:19.927 16:18:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81053 00:41:19.927 16:18:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:41:19.927 killing process with pid 81053 00:41:19.927 16:18:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:41:19.927 16:18:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81053' 00:41:19.927 16:18:24 -- common/autotest_common.sh@945 -- # kill 81053 00:41:19.927 Received shutdown signal, test time was about 15.401418 seconds 00:41:19.927 00:41:19.927 Latency(us) 00:41:19.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:19.927 =================================================================================================================== 00:41:19.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:19.927 16:18:24 -- common/autotest_common.sh@950 -- # wait 81053 00:41:19.927 [2024-07-22 16:18:24.009421] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:19.927 [2024-07-22 16:18:24.009558] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:19.927 [2024-07-22 16:18:24.009658] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:19.927 [2024-07-22 16:18:24.009693] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:41:20.197 [2024-07-22 16:18:24.198860] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:41:21.575 00:41:21.575 real 0m21.467s 00:41:21.575 user 0m32.012s 00:41:21.575 sys 0m3.019s 00:41:21.575 16:18:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:21.575 16:18:25 -- common/autotest_common.sh@10 -- # set +x 00:41:21.575 ************************************ 00:41:21.575 END TEST raid_rebuild_test_sb_io 00:41:21.575 ************************************ 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:41:21.575 16:18:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:41:21.575 16:18:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:21.575 16:18:25 -- common/autotest_common.sh@10 -- # set +x 00:41:21.575 ************************************ 00:41:21.575 START TEST raid_rebuild_test 00:41:21.575 ************************************ 00:41:21.575 16:18:25 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:41:21.575 16:18:25 -- 
bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@544 -- # raid_pid=81584 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81584 /var/tmp/spdk-raid.sock 00:41:21.575 16:18:25 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:21.575 16:18:25 -- common/autotest_common.sh@819 -- # '[' -z 81584 ']' 00:41:21.575 16:18:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:21.575 16:18:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:21.575 16:18:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:21.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:21.575 16:18:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:21.575 16:18:25 -- common/autotest_common.sh@10 -- # set +x 00:41:21.575 [2024-07-22 16:18:25.722076] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:41:21.575 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:21.575 Zero copy mechanism will not be used. 
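The bdevperf app launched above starts idle (-z) and is driven entirely over its RPC socket by the bdev_raid.sh test script. A minimal sketch of that flow, condensed from the rpc.py calls traced later in this log (socket path, bdev names, sizes, and flags are those used in this run; the grouping into one script is illustrative, not the literal test code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Four base malloc bdevs plus a delayed passthru "spare" used later for the rebuild
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4 spare_malloc; do
    $rpc -s $sock bdev_malloc_create 32 512 -b $b
  done
  $rpc -s $sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc -s $sock bdev_passthru_create -b spare_delay -p spare

  # Assemble the raid1 bdev (no superblock in this test variant)
  $rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

  # Expose the array over NBD and write a known pattern
  $rpc -s $sock nbd_start_disk raid_bdev1 /dev/nbd0
  dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
  $rpc -s $sock nbd_stop_disk /dev/nbd0

  # Degrade the array, then rebuild onto the spare
  $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev1
  $rpc -s $sock bdev_raid_add_base_bdev raid_bdev1 spare
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

  # After the rebuild finishes, the removed base bdev and the spare should hold identical data
  $rpc -s $sock nbd_start_disk BaseBdev1 /dev/nbd0
  $rpc -s $sock nbd_start_disk spare /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1

Between each of these steps the real test additionally checks the array state and rebuild progress by filtering bdev_raid_get_bdevs output through jq (verify_raid_bdev_state / verify_raid_bdev_process), as the traced calls below show.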
00:41:21.575 [2024-07-22 16:18:25.722287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81584 ] 00:41:21.833 [2024-07-22 16:18:25.901350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:22.092 [2024-07-22 16:18:26.174683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.351 [2024-07-22 16:18:26.400267] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:22.609 16:18:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:22.609 16:18:26 -- common/autotest_common.sh@852 -- # return 0 00:41:22.609 16:18:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:22.609 16:18:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:41:22.609 16:18:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:41:22.868 BaseBdev1 00:41:22.868 16:18:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:22.868 16:18:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:41:22.868 16:18:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:41:23.137 BaseBdev2 00:41:23.137 16:18:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:23.137 16:18:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:41:23.137 16:18:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:41:23.395 BaseBdev3 00:41:23.395 16:18:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:23.395 16:18:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:41:23.395 16:18:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:41:23.655 BaseBdev4 00:41:23.655 16:18:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:41:23.912 spare_malloc 00:41:23.912 16:18:28 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:24.170 spare_delay 00:41:24.170 16:18:28 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:24.430 [2024-07-22 16:18:28.544698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:24.430 [2024-07-22 16:18:28.544823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:24.430 [2024-07-22 16:18:28.544865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:41:24.430 [2024-07-22 16:18:28.544896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:24.430 [2024-07-22 16:18:28.548070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:24.430 [2024-07-22 16:18:28.548138] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:24.430 spare 00:41:24.430 16:18:28 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:41:24.690 [2024-07-22 16:18:28.769052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:24.690 [2024-07-22 16:18:28.771581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:24.690 [2024-07-22 16:18:28.771662] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:24.690 [2024-07-22 16:18:28.771723] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:24.690 [2024-07-22 16:18:28.771820] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:41:24.690 [2024-07-22 16:18:28.771839] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:41:24.690 [2024-07-22 16:18:28.772090] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:41:24.690 [2024-07-22 16:18:28.772561] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:41:24.690 [2024-07-22 16:18:28.772603] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:41:24.690 [2024-07-22 16:18:28.772929] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:24.690 16:18:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:24.948 16:18:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:24.948 "name": "raid_bdev1", 00:41:24.948 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:24.948 "strip_size_kb": 0, 00:41:24.948 "state": "online", 00:41:24.948 "raid_level": "raid1", 00:41:24.948 "superblock": false, 00:41:24.948 "num_base_bdevs": 4, 00:41:24.948 "num_base_bdevs_discovered": 4, 00:41:24.948 "num_base_bdevs_operational": 4, 00:41:24.948 "base_bdevs_list": [ 00:41:24.948 { 00:41:24.948 "name": "BaseBdev1", 00:41:24.948 "uuid": "6648e072-4aa1-4454-9c17-3558a7b1c3e7", 00:41:24.948 "is_configured": true, 00:41:24.948 "data_offset": 0, 00:41:24.948 "data_size": 65536 00:41:24.948 }, 00:41:24.948 { 00:41:24.948 "name": "BaseBdev2", 00:41:24.948 "uuid": "601cc4b7-1f48-413f-9d9c-6fc85139817c", 00:41:24.948 "is_configured": true, 00:41:24.948 "data_offset": 0, 00:41:24.948 "data_size": 65536 00:41:24.948 }, 00:41:24.948 { 00:41:24.948 "name": "BaseBdev3", 00:41:24.948 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:24.948 "is_configured": true, 00:41:24.948 "data_offset": 0, 00:41:24.948 "data_size": 65536 00:41:24.948 }, 
00:41:24.948 { 00:41:24.948 "name": "BaseBdev4", 00:41:24.948 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:24.948 "is_configured": true, 00:41:24.948 "data_offset": 0, 00:41:24.948 "data_size": 65536 00:41:24.948 } 00:41:24.948 ] 00:41:24.948 }' 00:41:24.948 16:18:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:24.948 16:18:29 -- common/autotest_common.sh@10 -- # set +x 00:41:25.206 16:18:29 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:25.206 16:18:29 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:41:25.465 [2024-07-22 16:18:29.629559] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:25.465 16:18:29 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:41:25.465 16:18:29 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:25.465 16:18:29 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:25.724 16:18:29 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:41:25.724 16:18:29 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:41:25.724 16:18:29 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:41:25.724 16:18:29 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@12 -- # local i 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:25.724 16:18:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:25.982 [2024-07-22 16:18:30.077528] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:41:25.982 /dev/nbd0 00:41:25.982 16:18:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:25.982 16:18:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:25.982 16:18:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:41:25.982 16:18:30 -- common/autotest_common.sh@857 -- # local i 00:41:25.982 16:18:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:41:25.982 16:18:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:41:25.982 16:18:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:41:25.982 16:18:30 -- common/autotest_common.sh@861 -- # break 00:41:25.982 16:18:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:41:25.982 16:18:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:41:25.982 16:18:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:25.982 1+0 records in 00:41:25.982 1+0 records out 00:41:25.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020483 s, 20.0 MB/s 00:41:25.982 16:18:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:25.982 16:18:30 -- common/autotest_common.sh@874 -- # size=4096 00:41:25.982 16:18:30 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:25.983 16:18:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:41:25.983 16:18:30 -- common/autotest_common.sh@877 -- # return 0 00:41:25.983 16:18:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:25.983 16:18:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:25.983 16:18:30 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:41:25.983 16:18:30 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:41:25.983 16:18:30 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:41:34.107 65536+0 records in 00:41:34.107 65536+0 records out 00:41:34.107 33554432 bytes (34 MB, 32 MiB) copied, 6.98825 s, 4.8 MB/s 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@51 -- # local i 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:34.107 [2024-07-22 16:18:37.328878] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@41 -- # break 00:41:34.107 16:18:37 -- bdev/nbd_common.sh@45 -- # return 0 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:41:34.107 [2024-07-22 16:18:37.541120] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:34.107 16:18:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:34.107 "name": "raid_bdev1", 00:41:34.107 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:34.107 "strip_size_kb": 0, 00:41:34.107 "state": "online", 00:41:34.107 
"raid_level": "raid1", 00:41:34.107 "superblock": false, 00:41:34.107 "num_base_bdevs": 4, 00:41:34.107 "num_base_bdevs_discovered": 3, 00:41:34.107 "num_base_bdevs_operational": 3, 00:41:34.107 "base_bdevs_list": [ 00:41:34.107 { 00:41:34.107 "name": null, 00:41:34.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:34.107 "is_configured": false, 00:41:34.107 "data_offset": 0, 00:41:34.107 "data_size": 65536 00:41:34.107 }, 00:41:34.107 { 00:41:34.107 "name": "BaseBdev2", 00:41:34.107 "uuid": "601cc4b7-1f48-413f-9d9c-6fc85139817c", 00:41:34.107 "is_configured": true, 00:41:34.107 "data_offset": 0, 00:41:34.107 "data_size": 65536 00:41:34.107 }, 00:41:34.107 { 00:41:34.107 "name": "BaseBdev3", 00:41:34.107 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:34.107 "is_configured": true, 00:41:34.107 "data_offset": 0, 00:41:34.108 "data_size": 65536 00:41:34.108 }, 00:41:34.108 { 00:41:34.108 "name": "BaseBdev4", 00:41:34.108 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:34.108 "is_configured": true, 00:41:34.108 "data_offset": 0, 00:41:34.108 "data_size": 65536 00:41:34.108 } 00:41:34.108 ] 00:41:34.108 }' 00:41:34.108 16:18:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:34.108 16:18:37 -- common/autotest_common.sh@10 -- # set +x 00:41:34.108 16:18:38 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:34.108 [2024-07-22 16:18:38.269291] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:34.108 [2024-07-22 16:18:38.269399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:34.108 [2024-07-22 16:18:38.284182] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09620 00:41:34.108 [2024-07-22 16:18:38.287148] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:34.108 16:18:38 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:41:35.112 16:18:39 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:35.112 16:18:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:35.112 16:18:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:35.112 16:18:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:35.112 16:18:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:35.112 16:18:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:35.112 16:18:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:35.373 16:18:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:35.373 "name": "raid_bdev1", 00:41:35.373 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:35.373 "strip_size_kb": 0, 00:41:35.373 "state": "online", 00:41:35.373 "raid_level": "raid1", 00:41:35.373 "superblock": false, 00:41:35.373 "num_base_bdevs": 4, 00:41:35.373 "num_base_bdevs_discovered": 4, 00:41:35.373 "num_base_bdevs_operational": 4, 00:41:35.373 "process": { 00:41:35.373 "type": "rebuild", 00:41:35.373 "target": "spare", 00:41:35.373 "progress": { 00:41:35.373 "blocks": 24576, 00:41:35.373 "percent": 37 00:41:35.373 } 00:41:35.373 }, 00:41:35.373 "base_bdevs_list": [ 00:41:35.373 { 00:41:35.373 "name": "spare", 00:41:35.373 "uuid": "9f170051-deea-5f83-9113-1cc4cb540bca", 00:41:35.373 "is_configured": true, 00:41:35.373 "data_offset": 0, 00:41:35.373 "data_size": 65536 00:41:35.373 }, 
00:41:35.373 { 00:41:35.373 "name": "BaseBdev2", 00:41:35.373 "uuid": "601cc4b7-1f48-413f-9d9c-6fc85139817c", 00:41:35.373 "is_configured": true, 00:41:35.373 "data_offset": 0, 00:41:35.373 "data_size": 65536 00:41:35.373 }, 00:41:35.373 { 00:41:35.373 "name": "BaseBdev3", 00:41:35.373 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:35.373 "is_configured": true, 00:41:35.373 "data_offset": 0, 00:41:35.373 "data_size": 65536 00:41:35.373 }, 00:41:35.373 { 00:41:35.373 "name": "BaseBdev4", 00:41:35.373 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:35.373 "is_configured": true, 00:41:35.373 "data_offset": 0, 00:41:35.373 "data_size": 65536 00:41:35.373 } 00:41:35.373 ] 00:41:35.373 }' 00:41:35.373 16:18:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:35.373 16:18:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:35.373 16:18:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:35.373 16:18:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:35.373 16:18:39 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:41:35.633 [2024-07-22 16:18:39.824903] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:41:35.633 [2024-07-22 16:18:39.902682] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:41:35.633 [2024-07-22 16:18:39.902818] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:35.891 16:18:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:36.150 16:18:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:36.150 "name": "raid_bdev1", 00:41:36.150 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:36.150 "strip_size_kb": 0, 00:41:36.150 "state": "online", 00:41:36.150 "raid_level": "raid1", 00:41:36.150 "superblock": false, 00:41:36.150 "num_base_bdevs": 4, 00:41:36.150 "num_base_bdevs_discovered": 3, 00:41:36.150 "num_base_bdevs_operational": 3, 00:41:36.150 "base_bdevs_list": [ 00:41:36.150 { 00:41:36.150 "name": null, 00:41:36.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:36.150 "is_configured": false, 00:41:36.150 "data_offset": 0, 00:41:36.150 "data_size": 65536 00:41:36.150 }, 00:41:36.150 { 00:41:36.150 "name": "BaseBdev2", 00:41:36.150 "uuid": "601cc4b7-1f48-413f-9d9c-6fc85139817c", 00:41:36.150 "is_configured": true, 00:41:36.150 "data_offset": 0, 00:41:36.150 "data_size": 65536 00:41:36.150 }, 00:41:36.150 { 00:41:36.150 "name": "BaseBdev3", 
00:41:36.150 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:36.150 "is_configured": true, 00:41:36.150 "data_offset": 0, 00:41:36.150 "data_size": 65536 00:41:36.150 }, 00:41:36.150 { 00:41:36.150 "name": "BaseBdev4", 00:41:36.150 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:36.150 "is_configured": true, 00:41:36.150 "data_offset": 0, 00:41:36.150 "data_size": 65536 00:41:36.150 } 00:41:36.150 ] 00:41:36.150 }' 00:41:36.150 16:18:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:36.150 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:41:36.408 16:18:40 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:36.408 16:18:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:36.408 16:18:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:36.408 16:18:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:36.408 16:18:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:36.408 16:18:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:36.408 16:18:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:36.666 16:18:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:36.666 "name": "raid_bdev1", 00:41:36.666 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:36.666 "strip_size_kb": 0, 00:41:36.666 "state": "online", 00:41:36.666 "raid_level": "raid1", 00:41:36.666 "superblock": false, 00:41:36.666 "num_base_bdevs": 4, 00:41:36.666 "num_base_bdevs_discovered": 3, 00:41:36.666 "num_base_bdevs_operational": 3, 00:41:36.666 "base_bdevs_list": [ 00:41:36.666 { 00:41:36.666 "name": null, 00:41:36.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:36.666 "is_configured": false, 00:41:36.666 "data_offset": 0, 00:41:36.666 "data_size": 65536 00:41:36.666 }, 00:41:36.666 { 00:41:36.666 "name": "BaseBdev2", 00:41:36.666 "uuid": "601cc4b7-1f48-413f-9d9c-6fc85139817c", 00:41:36.666 "is_configured": true, 00:41:36.666 "data_offset": 0, 00:41:36.666 "data_size": 65536 00:41:36.666 }, 00:41:36.666 { 00:41:36.666 "name": "BaseBdev3", 00:41:36.666 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:36.666 "is_configured": true, 00:41:36.666 "data_offset": 0, 00:41:36.666 "data_size": 65536 00:41:36.666 }, 00:41:36.666 { 00:41:36.666 "name": "BaseBdev4", 00:41:36.666 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:36.666 "is_configured": true, 00:41:36.666 "data_offset": 0, 00:41:36.666 "data_size": 65536 00:41:36.666 } 00:41:36.666 ] 00:41:36.666 }' 00:41:36.666 16:18:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:36.666 16:18:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:36.667 16:18:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:36.667 16:18:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:36.667 16:18:40 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:36.924 [2024-07-22 16:18:41.157933] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:41:36.924 [2024-07-22 16:18:41.158039] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:36.924 [2024-07-22 16:18:41.170368] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d096f0 00:41:36.924 [2024-07-22 16:18:41.173175] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:41:36.924 16:18:41 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:38.314 "name": "raid_bdev1", 00:41:38.314 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:38.314 "strip_size_kb": 0, 00:41:38.314 "state": "online", 00:41:38.314 "raid_level": "raid1", 00:41:38.314 "superblock": false, 00:41:38.314 "num_base_bdevs": 4, 00:41:38.314 "num_base_bdevs_discovered": 4, 00:41:38.314 "num_base_bdevs_operational": 4, 00:41:38.314 "process": { 00:41:38.314 "type": "rebuild", 00:41:38.314 "target": "spare", 00:41:38.314 "progress": { 00:41:38.314 "blocks": 24576, 00:41:38.314 "percent": 37 00:41:38.314 } 00:41:38.314 }, 00:41:38.314 "base_bdevs_list": [ 00:41:38.314 { 00:41:38.314 "name": "spare", 00:41:38.314 "uuid": "9f170051-deea-5f83-9113-1cc4cb540bca", 00:41:38.314 "is_configured": true, 00:41:38.314 "data_offset": 0, 00:41:38.314 "data_size": 65536 00:41:38.314 }, 00:41:38.314 { 00:41:38.314 "name": "BaseBdev2", 00:41:38.314 "uuid": "601cc4b7-1f48-413f-9d9c-6fc85139817c", 00:41:38.314 "is_configured": true, 00:41:38.314 "data_offset": 0, 00:41:38.314 "data_size": 65536 00:41:38.314 }, 00:41:38.314 { 00:41:38.314 "name": "BaseBdev3", 00:41:38.314 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:38.314 "is_configured": true, 00:41:38.314 "data_offset": 0, 00:41:38.314 "data_size": 65536 00:41:38.314 }, 00:41:38.314 { 00:41:38.314 "name": "BaseBdev4", 00:41:38.314 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:38.314 "is_configured": true, 00:41:38.314 "data_offset": 0, 00:41:38.314 "data_size": 65536 00:41:38.314 } 00:41:38.314 ] 00:41:38.314 }' 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:41:38.314 16:18:42 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:41:38.572 [2024-07-22 16:18:42.695220] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:41:38.572 [2024-07-22 16:18:42.788084] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000d096f0 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:41:38.572 16:18:42 -- 
bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:38.572 16:18:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:38.831 16:18:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:38.831 "name": "raid_bdev1", 00:41:38.831 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:38.831 "strip_size_kb": 0, 00:41:38.831 "state": "online", 00:41:38.831 "raid_level": "raid1", 00:41:38.831 "superblock": false, 00:41:38.831 "num_base_bdevs": 4, 00:41:38.831 "num_base_bdevs_discovered": 3, 00:41:38.831 "num_base_bdevs_operational": 3, 00:41:38.831 "process": { 00:41:38.831 "type": "rebuild", 00:41:38.831 "target": "spare", 00:41:38.831 "progress": { 00:41:38.831 "blocks": 36864, 00:41:38.831 "percent": 56 00:41:38.831 } 00:41:38.831 }, 00:41:38.831 "base_bdevs_list": [ 00:41:38.831 { 00:41:38.831 "name": "spare", 00:41:38.831 "uuid": "9f170051-deea-5f83-9113-1cc4cb540bca", 00:41:38.831 "is_configured": true, 00:41:38.831 "data_offset": 0, 00:41:38.831 "data_size": 65536 00:41:38.831 }, 00:41:38.831 { 00:41:38.831 "name": null, 00:41:38.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:38.831 "is_configured": false, 00:41:38.831 "data_offset": 0, 00:41:38.831 "data_size": 65536 00:41:38.831 }, 00:41:38.831 { 00:41:38.831 "name": "BaseBdev3", 00:41:38.831 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:38.831 "is_configured": true, 00:41:38.831 "data_offset": 0, 00:41:38.831 "data_size": 65536 00:41:38.831 }, 00:41:38.831 { 00:41:38.831 "name": "BaseBdev4", 00:41:38.831 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:38.831 "is_configured": true, 00:41:38.831 "data_offset": 0, 00:41:38.831 "data_size": 65536 00:41:38.831 } 00:41:38.831 ] 00:41:38.831 }' 00:41:38.831 16:18:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:38.831 16:18:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@657 -- # local timeout=488 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:39.089 "name": "raid_bdev1", 00:41:39.089 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:39.089 "strip_size_kb": 0, 00:41:39.089 
"state": "online", 00:41:39.089 "raid_level": "raid1", 00:41:39.089 "superblock": false, 00:41:39.089 "num_base_bdevs": 4, 00:41:39.089 "num_base_bdevs_discovered": 3, 00:41:39.089 "num_base_bdevs_operational": 3, 00:41:39.089 "process": { 00:41:39.089 "type": "rebuild", 00:41:39.089 "target": "spare", 00:41:39.089 "progress": { 00:41:39.089 "blocks": 43008, 00:41:39.089 "percent": 65 00:41:39.089 } 00:41:39.089 }, 00:41:39.089 "base_bdevs_list": [ 00:41:39.089 { 00:41:39.089 "name": "spare", 00:41:39.089 "uuid": "9f170051-deea-5f83-9113-1cc4cb540bca", 00:41:39.089 "is_configured": true, 00:41:39.089 "data_offset": 0, 00:41:39.089 "data_size": 65536 00:41:39.089 }, 00:41:39.089 { 00:41:39.089 "name": null, 00:41:39.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:39.089 "is_configured": false, 00:41:39.089 "data_offset": 0, 00:41:39.089 "data_size": 65536 00:41:39.089 }, 00:41:39.089 { 00:41:39.089 "name": "BaseBdev3", 00:41:39.089 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:39.089 "is_configured": true, 00:41:39.089 "data_offset": 0, 00:41:39.089 "data_size": 65536 00:41:39.089 }, 00:41:39.089 { 00:41:39.089 "name": "BaseBdev4", 00:41:39.089 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:39.089 "is_configured": true, 00:41:39.089 "data_offset": 0, 00:41:39.089 "data_size": 65536 00:41:39.089 } 00:41:39.089 ] 00:41:39.089 }' 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:41:39.089 16:18:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:40.464 [2024-07-22 16:18:44.403550] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:41:40.464 [2024-07-22 16:18:44.403674] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:41:40.464 [2024-07-22 16:18:44.403741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:40.464 "name": "raid_bdev1", 00:41:40.464 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:40.464 "strip_size_kb": 0, 00:41:40.464 "state": "online", 00:41:40.464 "raid_level": "raid1", 00:41:40.464 "superblock": false, 00:41:40.464 "num_base_bdevs": 4, 00:41:40.464 "num_base_bdevs_discovered": 3, 00:41:40.464 "num_base_bdevs_operational": 3, 00:41:40.464 "base_bdevs_list": [ 00:41:40.464 { 00:41:40.464 "name": "spare", 00:41:40.464 "uuid": "9f170051-deea-5f83-9113-1cc4cb540bca", 00:41:40.464 "is_configured": true, 00:41:40.464 "data_offset": 0, 00:41:40.464 "data_size": 65536 00:41:40.464 
}, 00:41:40.464 { 00:41:40.464 "name": null, 00:41:40.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:40.464 "is_configured": false, 00:41:40.464 "data_offset": 0, 00:41:40.464 "data_size": 65536 00:41:40.464 }, 00:41:40.464 { 00:41:40.464 "name": "BaseBdev3", 00:41:40.464 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:40.464 "is_configured": true, 00:41:40.464 "data_offset": 0, 00:41:40.464 "data_size": 65536 00:41:40.464 }, 00:41:40.464 { 00:41:40.464 "name": "BaseBdev4", 00:41:40.464 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:40.464 "is_configured": true, 00:41:40.464 "data_offset": 0, 00:41:40.464 "data_size": 65536 00:41:40.464 } 00:41:40.464 ] 00:41:40.464 }' 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@660 -- # break 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:40.464 16:18:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:41:40.723 "name": "raid_bdev1", 00:41:40.723 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:40.723 "strip_size_kb": 0, 00:41:40.723 "state": "online", 00:41:40.723 "raid_level": "raid1", 00:41:40.723 "superblock": false, 00:41:40.723 "num_base_bdevs": 4, 00:41:40.723 "num_base_bdevs_discovered": 3, 00:41:40.723 "num_base_bdevs_operational": 3, 00:41:40.723 "base_bdevs_list": [ 00:41:40.723 { 00:41:40.723 "name": "spare", 00:41:40.723 "uuid": "9f170051-deea-5f83-9113-1cc4cb540bca", 00:41:40.723 "is_configured": true, 00:41:40.723 "data_offset": 0, 00:41:40.723 "data_size": 65536 00:41:40.723 }, 00:41:40.723 { 00:41:40.723 "name": null, 00:41:40.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:40.723 "is_configured": false, 00:41:40.723 "data_offset": 0, 00:41:40.723 "data_size": 65536 00:41:40.723 }, 00:41:40.723 { 00:41:40.723 "name": "BaseBdev3", 00:41:40.723 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:40.723 "is_configured": true, 00:41:40.723 "data_offset": 0, 00:41:40.723 "data_size": 65536 00:41:40.723 }, 00:41:40.723 { 00:41:40.723 "name": "BaseBdev4", 00:41:40.723 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:40.723 "is_configured": true, 00:41:40.723 "data_offset": 0, 00:41:40.723 "data_size": 65536 00:41:40.723 } 00:41:40.723 ] 00:41:40.723 }' 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:41:40.723 
16:18:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:40.723 16:18:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:40.981 16:18:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:40.981 "name": "raid_bdev1", 00:41:40.981 "uuid": "da808fbc-4571-4d39-846a-70d17b2f4633", 00:41:40.981 "strip_size_kb": 0, 00:41:40.981 "state": "online", 00:41:40.981 "raid_level": "raid1", 00:41:40.981 "superblock": false, 00:41:40.981 "num_base_bdevs": 4, 00:41:40.981 "num_base_bdevs_discovered": 3, 00:41:40.981 "num_base_bdevs_operational": 3, 00:41:40.981 "base_bdevs_list": [ 00:41:40.981 { 00:41:40.981 "name": "spare", 00:41:40.981 "uuid": "9f170051-deea-5f83-9113-1cc4cb540bca", 00:41:40.981 "is_configured": true, 00:41:40.981 "data_offset": 0, 00:41:40.981 "data_size": 65536 00:41:40.981 }, 00:41:40.981 { 00:41:40.981 "name": null, 00:41:40.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:40.981 "is_configured": false, 00:41:40.981 "data_offset": 0, 00:41:40.981 "data_size": 65536 00:41:40.981 }, 00:41:40.981 { 00:41:40.981 "name": "BaseBdev3", 00:41:40.981 "uuid": "da388ca3-7d2b-4168-a849-bf396b82e2a1", 00:41:40.981 "is_configured": true, 00:41:40.981 "data_offset": 0, 00:41:40.981 "data_size": 65536 00:41:40.981 }, 00:41:40.981 { 00:41:40.981 "name": "BaseBdev4", 00:41:40.981 "uuid": "902e4e85-7c54-4e2b-8cd4-24d03dd9d9b4", 00:41:40.981 "is_configured": true, 00:41:40.981 "data_offset": 0, 00:41:40.981 "data_size": 65536 00:41:40.981 } 00:41:40.981 ] 00:41:40.981 }' 00:41:40.981 16:18:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:40.981 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:41:41.239 16:18:45 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:41:41.805 [2024-07-22 16:18:45.789688] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:41:41.805 [2024-07-22 16:18:45.789773] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:41:41.805 [2024-07-22 16:18:45.789888] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:41:41.805 [2024-07-22 16:18:45.789996] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:41:41.805 [2024-07-22 16:18:45.790074] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:41:41.805 16:18:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:41.805 16:18:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:41:42.063 16:18:46 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:41:42.063 16:18:46 
-- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:41:42.063 16:18:46 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@12 -- # local i 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:42.063 16:18:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:41:42.321 /dev/nbd0 00:41:42.321 16:18:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:42.321 16:18:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:42.321 16:18:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:41:42.321 16:18:46 -- common/autotest_common.sh@857 -- # local i 00:41:42.321 16:18:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:41:42.321 16:18:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:41:42.321 16:18:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:41:42.321 16:18:46 -- common/autotest_common.sh@861 -- # break 00:41:42.321 16:18:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:41:42.321 16:18:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:41:42.322 16:18:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:42.322 1+0 records in 00:41:42.322 1+0 records out 00:41:42.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250182 s, 16.4 MB/s 00:41:42.322 16:18:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:42.322 16:18:46 -- common/autotest_common.sh@874 -- # size=4096 00:41:42.322 16:18:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:42.322 16:18:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:41:42.322 16:18:46 -- common/autotest_common.sh@877 -- # return 0 00:41:42.322 16:18:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:42.322 16:18:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:42.322 16:18:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:41:42.580 /dev/nbd1 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:42.580 16:18:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:41:42.580 16:18:46 -- common/autotest_common.sh@857 -- # local i 00:41:42.580 16:18:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:41:42.580 16:18:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:41:42.580 16:18:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:41:42.580 16:18:46 -- common/autotest_common.sh@861 -- # break 00:41:42.580 16:18:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:41:42.580 16:18:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:41:42.580 16:18:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:42.580 1+0 records in 00:41:42.580 1+0 records out 00:41:42.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468932 s, 8.7 MB/s 00:41:42.580 16:18:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:42.580 16:18:46 -- common/autotest_common.sh@874 -- # size=4096 00:41:42.580 16:18:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:42.580 16:18:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:41:42.580 16:18:46 -- common/autotest_common.sh@877 -- # return 0 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:42.580 16:18:46 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:41:42.580 16:18:46 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@51 -- # local i 00:41:42.580 16:18:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:42.581 16:18:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@41 -- # break 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@45 -- # return 0 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@41 -- # break 00:41:43.148 16:18:47 -- bdev/nbd_common.sh@45 -- # return 0 00:41:43.148 16:18:47 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:41:43.148 16:18:47 -- bdev/bdev_raid.sh@709 -- # killprocess 81584 00:41:43.148 16:18:47 -- common/autotest_common.sh@926 -- # '[' -z 81584 ']' 00:41:43.148 16:18:47 -- common/autotest_common.sh@930 -- # kill -0 81584 00:41:43.148 16:18:47 -- common/autotest_common.sh@931 -- # uname 00:41:43.148 16:18:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:41:43.148 16:18:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81584 00:41:43.407 killing process with pid 81584 00:41:43.407 Received shutdown signal, test time was about 60.000000 seconds 00:41:43.407 00:41:43.407 Latency(us) 00:41:43.407 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:43.407 =================================================================================================================== 00:41:43.407 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:43.407 16:18:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:41:43.407 16:18:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:41:43.407 16:18:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81584' 00:41:43.407 16:18:47 -- common/autotest_common.sh@945 -- # kill 81584 00:41:43.407 16:18:47 -- common/autotest_common.sh@950 -- # wait 81584 00:41:43.407 [2024-07-22 16:18:47.424031] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:41:43.669 [2024-07-22 16:18:47.880073] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:41:45.043 ************************************ 00:41:45.043 END TEST raid_rebuild_test 00:41:45.043 ************************************ 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@711 -- # return 0 00:41:45.043 00:41:45.043 real 0m23.553s 00:41:45.043 user 0m29.748s 00:41:45.043 sys 0m4.688s 00:41:45.043 16:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:45.043 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:41:45.043 16:18:49 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:41:45.043 16:18:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:41:45.043 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:41:45.043 ************************************ 00:41:45.043 START TEST raid_rebuild_test_sb 00:41:45.043 ************************************ 00:41:45.043 16:18:49 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:41:45.043 16:18:49 -- 
bdev/bdev_raid.sh@524 -- # local create_arg 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@544 -- # raid_pid=82106 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@545 -- # waitforlisten 82106 /var/tmp/spdk-raid.sock 00:41:45.043 16:18:49 -- common/autotest_common.sh@819 -- # '[' -z 82106 ']' 00:41:45.043 16:18:49 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:41:45.043 16:18:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:41:45.043 16:18:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:41:45.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:41:45.043 16:18:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:41:45.043 16:18:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:41:45.043 16:18:49 -- common/autotest_common.sh@10 -- # set +x 00:41:45.043 [2024-07-22 16:18:49.314613] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:41:45.043 I/O size of 3145728 is greater than zero copy threshold (65536). 00:41:45.043 Zero copy mechanism will not be used. 00:41:45.044 [2024-07-22 16:18:49.315125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82106 ] 00:41:45.302 [2024-07-22 16:18:49.496958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:45.561 [2024-07-22 16:18:49.770503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:45.819 [2024-07-22 16:18:49.971022] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:41:46.076 16:18:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:41:46.076 16:18:50 -- common/autotest_common.sh@852 -- # return 0 00:41:46.076 16:18:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:46.076 16:18:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:46.076 16:18:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:41:46.334 BaseBdev1_malloc 00:41:46.334 16:18:50 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:41:46.592 [2024-07-22 16:18:50.844657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:41:46.592 [2024-07-22 16:18:50.844967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:46.592 [2024-07-22 16:18:50.845069] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:41:46.592 [2024-07-22 16:18:50.845094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:46.592 [2024-07-22 
16:18:50.848026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:46.592 [2024-07-22 16:18:50.848074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:41:46.592 BaseBdev1 00:41:46.592 16:18:50 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:46.854 16:18:50 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:46.854 16:18:50 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:41:47.120 BaseBdev2_malloc 00:41:47.120 16:18:51 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:41:47.120 [2024-07-22 16:18:51.381609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:41:47.120 [2024-07-22 16:18:51.382065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:47.120 [2024-07-22 16:18:51.382244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:41:47.120 [2024-07-22 16:18:51.382386] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:47.120 [2024-07-22 16:18:51.385421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:47.120 [2024-07-22 16:18:51.385659] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:41:47.120 BaseBdev2 00:41:47.378 16:18:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:47.378 16:18:51 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:47.378 16:18:51 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:41:47.635 BaseBdev3_malloc 00:41:47.635 16:18:51 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:41:47.635 [2024-07-22 16:18:51.904033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:41:47.635 [2024-07-22 16:18:51.904388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:47.635 [2024-07-22 16:18:51.904459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:41:47.635 [2024-07-22 16:18:51.904482] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:47.635 [2024-07-22 16:18:51.907676] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:47.893 BaseBdev3 00:41:47.893 [2024-07-22 16:18:51.907948] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:41:47.893 16:18:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:41:47.893 16:18:51 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:41:47.893 16:18:51 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:41:48.152 BaseBdev4_malloc 00:41:48.152 16:18:52 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:41:48.410 [2024-07-22 16:18:52.497998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:41:48.410 [2024-07-22 16:18:52.498406] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:48.410 [2024-07-22 16:18:52.498516] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:41:48.410 [2024-07-22 16:18:52.498747] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:48.410 [2024-07-22 16:18:52.502063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:48.410 [2024-07-22 16:18:52.502264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:41:48.410 BaseBdev4 00:41:48.410 16:18:52 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:41:48.668 spare_malloc 00:41:48.668 16:18:52 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:41:48.926 spare_delay 00:41:48.926 16:18:53 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:41:49.183 [2024-07-22 16:18:53.216942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:41:49.183 [2024-07-22 16:18:53.217282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:41:49.183 [2024-07-22 16:18:53.217453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:41:49.183 [2024-07-22 16:18:53.217598] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:41:49.183 [2024-07-22 16:18:53.220612] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:41:49.183 [2024-07-22 16:18:53.220666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:41:49.183 spare 00:41:49.183 16:18:53 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:41:49.441 [2024-07-22 16:18:53.461126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:41:49.441 [2024-07-22 16:18:53.464099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:41:49.441 [2024-07-22 16:18:53.464335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:41:49.441 [2024-07-22 16:18:53.464588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:41:49.441 [2024-07-22 16:18:53.465075] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:41:49.441 [2024-07-22 16:18:53.465222] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:41:49.441 [2024-07-22 16:18:53.465448] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:41:49.441 [2024-07-22 16:18:53.466062] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:41:49.441 [2024-07-22 16:18:53.466202] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:41:49.441 [2024-07-22 16:18:53.466562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=raid_bdev1 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:49.441 16:18:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:49.726 16:18:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:49.726 "name": "raid_bdev1", 00:41:49.726 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:41:49.726 "strip_size_kb": 0, 00:41:49.726 "state": "online", 00:41:49.726 "raid_level": "raid1", 00:41:49.726 "superblock": true, 00:41:49.726 "num_base_bdevs": 4, 00:41:49.726 "num_base_bdevs_discovered": 4, 00:41:49.726 "num_base_bdevs_operational": 4, 00:41:49.726 "base_bdevs_list": [ 00:41:49.726 { 00:41:49.726 "name": "BaseBdev1", 00:41:49.726 "uuid": "35cc045d-899e-507c-a523-85de7eb4f348", 00:41:49.726 "is_configured": true, 00:41:49.726 "data_offset": 2048, 00:41:49.726 "data_size": 63488 00:41:49.726 }, 00:41:49.726 { 00:41:49.726 "name": "BaseBdev2", 00:41:49.726 "uuid": "d8eea433-3fde-50a7-904b-d0325de4146f", 00:41:49.726 "is_configured": true, 00:41:49.726 "data_offset": 2048, 00:41:49.726 "data_size": 63488 00:41:49.726 }, 00:41:49.726 { 00:41:49.726 "name": "BaseBdev3", 00:41:49.726 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:41:49.726 "is_configured": true, 00:41:49.726 "data_offset": 2048, 00:41:49.726 "data_size": 63488 00:41:49.726 }, 00:41:49.726 { 00:41:49.726 "name": "BaseBdev4", 00:41:49.726 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:41:49.726 "is_configured": true, 00:41:49.726 "data_offset": 2048, 00:41:49.726 "data_size": 63488 00:41:49.726 } 00:41:49.726 ] 00:41:49.726 }' 00:41:49.726 16:18:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:49.726 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:41:49.986 16:18:54 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:41:49.986 16:18:54 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:41:50.244 [2024-07-22 16:18:54.281456] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:41:50.244 16:18:54 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:41:50.244 16:18:54 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:41:50.244 16:18:54 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:50.503 16:18:54 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:41:50.503 16:18:54 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:41:50.503 16:18:54 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:41:50.503 16:18:54 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('raid_bdev1') 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@12 -- # local i 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:50.503 16:18:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:41:50.762 [2024-07-22 16:18:54.829554] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:41:50.762 /dev/nbd0 00:41:50.762 16:18:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:50.762 16:18:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:50.762 16:18:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:41:50.762 16:18:54 -- common/autotest_common.sh@857 -- # local i 00:41:50.762 16:18:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:41:50.762 16:18:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:41:50.762 16:18:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:41:50.762 16:18:54 -- common/autotest_common.sh@861 -- # break 00:41:50.762 16:18:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:41:50.762 16:18:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:41:50.762 16:18:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:50.762 1+0 records in 00:41:50.762 1+0 records out 00:41:50.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344313 s, 11.9 MB/s 00:41:50.762 16:18:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:50.762 16:18:54 -- common/autotest_common.sh@874 -- # size=4096 00:41:50.762 16:18:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:41:50.762 16:18:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:41:50.762 16:18:54 -- common/autotest_common.sh@877 -- # return 0 00:41:50.762 16:18:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:50.762 16:18:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:41:50.762 16:18:54 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:41:50.762 16:18:54 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:41:50.762 16:18:54 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:41:58.898 63488+0 records in 00:41:58.898 63488+0 records out 00:41:58.898 32505856 bytes (33 MB, 31 MiB) copied, 7.76245 s, 4.2 MB/s 00:41:58.898 16:19:02 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@51 -- # local i 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:41:58.898 [2024-07-22 16:19:02.885629] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:58.898 16:19:02 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@41 -- # break 00:41:58.898 16:19:02 -- bdev/nbd_common.sh@45 -- # return 0 00:41:58.898 16:19:02 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:41:58.898 [2024-07-22 16:19:03.157869] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:41:59.155 16:19:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:41:59.413 16:19:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:41:59.413 "name": "raid_bdev1", 00:41:59.413 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:41:59.413 "strip_size_kb": 0, 00:41:59.413 "state": "online", 00:41:59.413 "raid_level": "raid1", 00:41:59.413 "superblock": true, 00:41:59.413 "num_base_bdevs": 4, 00:41:59.413 "num_base_bdevs_discovered": 3, 00:41:59.413 "num_base_bdevs_operational": 3, 00:41:59.413 "base_bdevs_list": [ 00:41:59.413 { 00:41:59.413 "name": null, 00:41:59.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:41:59.413 "is_configured": false, 00:41:59.413 "data_offset": 2048, 00:41:59.413 "data_size": 63488 00:41:59.413 }, 00:41:59.413 { 00:41:59.413 "name": "BaseBdev2", 00:41:59.413 "uuid": "d8eea433-3fde-50a7-904b-d0325de4146f", 00:41:59.413 "is_configured": true, 00:41:59.413 "data_offset": 2048, 00:41:59.413 "data_size": 63488 00:41:59.413 }, 00:41:59.413 { 00:41:59.413 "name": "BaseBdev3", 00:41:59.413 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:41:59.413 "is_configured": true, 00:41:59.413 "data_offset": 2048, 00:41:59.413 "data_size": 63488 00:41:59.413 }, 00:41:59.413 { 00:41:59.413 "name": "BaseBdev4", 00:41:59.413 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:41:59.413 "is_configured": true, 00:41:59.413 "data_offset": 2048, 00:41:59.413 "data_size": 63488 00:41:59.413 } 00:41:59.413 ] 00:41:59.413 }' 00:41:59.413 16:19:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:41:59.413 16:19:03 -- common/autotest_common.sh@10 -- # set +x 00:41:59.671 16:19:03 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:41:59.929 [2024-07-22 16:19:03.998181] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:41:59.929 [2024-07-22 16:19:03.998257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:41:59.929 [2024-07-22 16:19:04.012834] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2db0 00:41:59.929 [2024-07-22 16:19:04.015557] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:41:59.929 16:19:04 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:42:00.863 16:19:05 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:00.863 16:19:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:00.863 16:19:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:00.863 16:19:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:00.863 16:19:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:00.863 16:19:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:00.863 16:19:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:01.121 16:19:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:01.121 "name": "raid_bdev1", 00:42:01.121 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:01.121 "strip_size_kb": 0, 00:42:01.121 "state": "online", 00:42:01.122 "raid_level": "raid1", 00:42:01.122 "superblock": true, 00:42:01.122 "num_base_bdevs": 4, 00:42:01.122 "num_base_bdevs_discovered": 4, 00:42:01.122 "num_base_bdevs_operational": 4, 00:42:01.122 "process": { 00:42:01.122 "type": "rebuild", 00:42:01.122 "target": "spare", 00:42:01.122 "progress": { 00:42:01.122 "blocks": 24576, 00:42:01.122 "percent": 38 00:42:01.122 } 00:42:01.122 }, 00:42:01.122 "base_bdevs_list": [ 00:42:01.122 { 00:42:01.122 "name": "spare", 00:42:01.122 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:01.122 "is_configured": true, 00:42:01.122 "data_offset": 2048, 00:42:01.122 "data_size": 63488 00:42:01.122 }, 00:42:01.122 { 00:42:01.122 "name": "BaseBdev2", 00:42:01.122 "uuid": "d8eea433-3fde-50a7-904b-d0325de4146f", 00:42:01.122 "is_configured": true, 00:42:01.122 "data_offset": 2048, 00:42:01.122 "data_size": 63488 00:42:01.122 }, 00:42:01.122 { 00:42:01.122 "name": "BaseBdev3", 00:42:01.122 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:01.122 "is_configured": true, 00:42:01.122 "data_offset": 2048, 00:42:01.122 "data_size": 63488 00:42:01.122 }, 00:42:01.122 { 00:42:01.122 "name": "BaseBdev4", 00:42:01.122 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:01.122 "is_configured": true, 00:42:01.122 "data_offset": 2048, 00:42:01.122 "data_size": 63488 00:42:01.122 } 00:42:01.122 ] 00:42:01.122 }' 00:42:01.122 16:19:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:01.122 16:19:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:01.122 16:19:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:01.122 16:19:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:01.122 16:19:05 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:42:01.380 [2024-07-22 16:19:05.557411] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:01.380 [2024-07-22 16:19:05.630315] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:01.380 [2024-07-22 16:19:05.630753] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:01.639 16:19:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:01.897 16:19:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:01.897 "name": "raid_bdev1", 00:42:01.897 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:01.897 "strip_size_kb": 0, 00:42:01.897 "state": "online", 00:42:01.897 "raid_level": "raid1", 00:42:01.897 "superblock": true, 00:42:01.897 "num_base_bdevs": 4, 00:42:01.897 "num_base_bdevs_discovered": 3, 00:42:01.897 "num_base_bdevs_operational": 3, 00:42:01.897 "base_bdevs_list": [ 00:42:01.897 { 00:42:01.897 "name": null, 00:42:01.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:01.897 "is_configured": false, 00:42:01.897 "data_offset": 2048, 00:42:01.897 "data_size": 63488 00:42:01.897 }, 00:42:01.897 { 00:42:01.897 "name": "BaseBdev2", 00:42:01.897 "uuid": "d8eea433-3fde-50a7-904b-d0325de4146f", 00:42:01.897 "is_configured": true, 00:42:01.897 "data_offset": 2048, 00:42:01.897 "data_size": 63488 00:42:01.897 }, 00:42:01.897 { 00:42:01.897 "name": "BaseBdev3", 00:42:01.897 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:01.897 "is_configured": true, 00:42:01.897 "data_offset": 2048, 00:42:01.897 "data_size": 63488 00:42:01.897 }, 00:42:01.897 { 00:42:01.897 "name": "BaseBdev4", 00:42:01.897 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:01.897 "is_configured": true, 00:42:01.897 "data_offset": 2048, 00:42:01.897 "data_size": 63488 00:42:01.897 } 00:42:01.897 ] 00:42:01.897 }' 00:42:01.897 16:19:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:01.898 16:19:05 -- common/autotest_common.sh@10 -- # set +x 00:42:02.155 16:19:06 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:02.155 16:19:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:02.155 16:19:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:02.155 16:19:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:02.155 16:19:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:02.155 16:19:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:02.155 16:19:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:02.414 16:19:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:02.414 "name": "raid_bdev1", 00:42:02.414 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:02.414 "strip_size_kb": 0, 00:42:02.414 "state": "online", 00:42:02.414 "raid_level": "raid1", 00:42:02.414 
"superblock": true, 00:42:02.414 "num_base_bdevs": 4, 00:42:02.414 "num_base_bdevs_discovered": 3, 00:42:02.414 "num_base_bdevs_operational": 3, 00:42:02.414 "base_bdevs_list": [ 00:42:02.414 { 00:42:02.414 "name": null, 00:42:02.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:02.414 "is_configured": false, 00:42:02.414 "data_offset": 2048, 00:42:02.414 "data_size": 63488 00:42:02.414 }, 00:42:02.414 { 00:42:02.414 "name": "BaseBdev2", 00:42:02.414 "uuid": "d8eea433-3fde-50a7-904b-d0325de4146f", 00:42:02.414 "is_configured": true, 00:42:02.414 "data_offset": 2048, 00:42:02.414 "data_size": 63488 00:42:02.414 }, 00:42:02.414 { 00:42:02.414 "name": "BaseBdev3", 00:42:02.414 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:02.414 "is_configured": true, 00:42:02.414 "data_offset": 2048, 00:42:02.414 "data_size": 63488 00:42:02.414 }, 00:42:02.414 { 00:42:02.414 "name": "BaseBdev4", 00:42:02.414 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:02.414 "is_configured": true, 00:42:02.414 "data_offset": 2048, 00:42:02.414 "data_size": 63488 00:42:02.414 } 00:42:02.414 ] 00:42:02.414 }' 00:42:02.414 16:19:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:02.414 16:19:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:02.414 16:19:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:02.414 16:19:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:02.414 16:19:06 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:02.686 [2024-07-22 16:19:06.777650] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:42:02.686 [2024-07-22 16:19:06.777907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:02.686 [2024-07-22 16:19:06.791082] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2e80 00:42:02.686 [2024-07-22 16:19:06.793867] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:02.686 16:19:06 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:42:03.642 16:19:07 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:03.643 16:19:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:03.643 16:19:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:03.643 16:19:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:03.643 16:19:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:03.643 16:19:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:03.643 16:19:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:03.901 "name": "raid_bdev1", 00:42:03.901 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:03.901 "strip_size_kb": 0, 00:42:03.901 "state": "online", 00:42:03.901 "raid_level": "raid1", 00:42:03.901 "superblock": true, 00:42:03.901 "num_base_bdevs": 4, 00:42:03.901 "num_base_bdevs_discovered": 4, 00:42:03.901 "num_base_bdevs_operational": 4, 00:42:03.901 "process": { 00:42:03.901 "type": "rebuild", 00:42:03.901 "target": "spare", 00:42:03.901 "progress": { 00:42:03.901 "blocks": 24576, 00:42:03.901 "percent": 38 00:42:03.901 } 00:42:03.901 }, 00:42:03.901 "base_bdevs_list": [ 00:42:03.901 { 00:42:03.901 "name": "spare", 00:42:03.901 "uuid": 
"eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:03.901 "is_configured": true, 00:42:03.901 "data_offset": 2048, 00:42:03.901 "data_size": 63488 00:42:03.901 }, 00:42:03.901 { 00:42:03.901 "name": "BaseBdev2", 00:42:03.901 "uuid": "d8eea433-3fde-50a7-904b-d0325de4146f", 00:42:03.901 "is_configured": true, 00:42:03.901 "data_offset": 2048, 00:42:03.901 "data_size": 63488 00:42:03.901 }, 00:42:03.901 { 00:42:03.901 "name": "BaseBdev3", 00:42:03.901 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:03.901 "is_configured": true, 00:42:03.901 "data_offset": 2048, 00:42:03.901 "data_size": 63488 00:42:03.901 }, 00:42:03.901 { 00:42:03.901 "name": "BaseBdev4", 00:42:03.901 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:03.901 "is_configured": true, 00:42:03.901 "data_offset": 2048, 00:42:03.901 "data_size": 63488 00:42:03.901 } 00:42:03.901 ] 00:42:03.901 }' 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:42:03.901 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:42:03.901 16:19:08 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:42:04.160 [2024-07-22 16:19:08.323504] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:04.160 [2024-07-22 16:19:08.408417] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000ca2e80 00:42:04.418 16:19:08 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:42:04.418 16:19:08 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:42:04.418 16:19:08 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:04.419 16:19:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:04.419 16:19:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:04.419 16:19:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:04.419 16:19:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:04.419 16:19:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:04.419 16:19:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:04.677 "name": "raid_bdev1", 00:42:04.677 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:04.677 "strip_size_kb": 0, 00:42:04.677 "state": "online", 00:42:04.677 "raid_level": "raid1", 00:42:04.677 "superblock": true, 00:42:04.677 "num_base_bdevs": 4, 00:42:04.677 "num_base_bdevs_discovered": 3, 00:42:04.677 "num_base_bdevs_operational": 3, 00:42:04.677 "process": { 00:42:04.677 "type": "rebuild", 00:42:04.677 "target": "spare", 00:42:04.677 "progress": { 00:42:04.677 "blocks": 38912, 00:42:04.677 "percent": 61 00:42:04.677 } 00:42:04.677 }, 00:42:04.677 "base_bdevs_list": [ 
00:42:04.677 { 00:42:04.677 "name": "spare", 00:42:04.677 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:04.677 "is_configured": true, 00:42:04.677 "data_offset": 2048, 00:42:04.677 "data_size": 63488 00:42:04.677 }, 00:42:04.677 { 00:42:04.677 "name": null, 00:42:04.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:04.677 "is_configured": false, 00:42:04.677 "data_offset": 2048, 00:42:04.677 "data_size": 63488 00:42:04.677 }, 00:42:04.677 { 00:42:04.677 "name": "BaseBdev3", 00:42:04.677 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:04.677 "is_configured": true, 00:42:04.677 "data_offset": 2048, 00:42:04.677 "data_size": 63488 00:42:04.677 }, 00:42:04.677 { 00:42:04.677 "name": "BaseBdev4", 00:42:04.677 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:04.677 "is_configured": true, 00:42:04.677 "data_offset": 2048, 00:42:04.677 "data_size": 63488 00:42:04.677 } 00:42:04.677 ] 00:42:04.677 }' 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@657 -- # local timeout=513 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:04.677 16:19:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:04.936 16:19:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:04.936 "name": "raid_bdev1", 00:42:04.936 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:04.936 "strip_size_kb": 0, 00:42:04.936 "state": "online", 00:42:04.936 "raid_level": "raid1", 00:42:04.936 "superblock": true, 00:42:04.936 "num_base_bdevs": 4, 00:42:04.936 "num_base_bdevs_discovered": 3, 00:42:04.936 "num_base_bdevs_operational": 3, 00:42:04.936 "process": { 00:42:04.936 "type": "rebuild", 00:42:04.936 "target": "spare", 00:42:04.936 "progress": { 00:42:04.936 "blocks": 45056, 00:42:04.936 "percent": 70 00:42:04.936 } 00:42:04.936 }, 00:42:04.936 "base_bdevs_list": [ 00:42:04.936 { 00:42:04.936 "name": "spare", 00:42:04.936 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:04.936 "is_configured": true, 00:42:04.936 "data_offset": 2048, 00:42:04.936 "data_size": 63488 00:42:04.936 }, 00:42:04.936 { 00:42:04.936 "name": null, 00:42:04.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:04.936 "is_configured": false, 00:42:04.936 "data_offset": 2048, 00:42:04.936 "data_size": 63488 00:42:04.936 }, 00:42:04.936 { 00:42:04.936 "name": "BaseBdev3", 00:42:04.936 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:04.936 "is_configured": true, 00:42:04.936 "data_offset": 2048, 00:42:04.936 "data_size": 63488 00:42:04.936 }, 00:42:04.936 { 00:42:04.936 "name": "BaseBdev4", 00:42:04.936 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:04.936 "is_configured": true, 
00:42:04.936 "data_offset": 2048, 00:42:04.936 "data_size": 63488 00:42:04.936 } 00:42:04.936 ] 00:42:04.936 }' 00:42:04.936 16:19:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:04.936 16:19:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:04.936 16:19:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:04.936 16:19:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:04.936 16:19:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:05.871 [2024-07-22 16:19:09.921791] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:05.871 [2024-07-22 16:19:09.921915] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:05.871 [2024-07-22 16:19:09.922142] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:05.871 16:19:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:06.140 16:19:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:06.140 "name": "raid_bdev1", 00:42:06.140 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:06.140 "strip_size_kb": 0, 00:42:06.140 "state": "online", 00:42:06.140 "raid_level": "raid1", 00:42:06.141 "superblock": true, 00:42:06.141 "num_base_bdevs": 4, 00:42:06.141 "num_base_bdevs_discovered": 3, 00:42:06.141 "num_base_bdevs_operational": 3, 00:42:06.141 "base_bdevs_list": [ 00:42:06.141 { 00:42:06.141 "name": "spare", 00:42:06.141 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:06.141 "is_configured": true, 00:42:06.141 "data_offset": 2048, 00:42:06.141 "data_size": 63488 00:42:06.141 }, 00:42:06.141 { 00:42:06.141 "name": null, 00:42:06.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.141 "is_configured": false, 00:42:06.141 "data_offset": 2048, 00:42:06.141 "data_size": 63488 00:42:06.141 }, 00:42:06.141 { 00:42:06.141 "name": "BaseBdev3", 00:42:06.141 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:06.141 "is_configured": true, 00:42:06.141 "data_offset": 2048, 00:42:06.141 "data_size": 63488 00:42:06.141 }, 00:42:06.141 { 00:42:06.141 "name": "BaseBdev4", 00:42:06.141 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:06.141 "is_configured": true, 00:42:06.141 "data_offset": 2048, 00:42:06.141 "data_size": 63488 00:42:06.141 } 00:42:06.141 ] 00:42:06.141 }' 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@660 -- # break 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:06.141 16:19:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:06.398 16:19:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:06.398 "name": "raid_bdev1", 00:42:06.398 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:06.398 "strip_size_kb": 0, 00:42:06.398 "state": "online", 00:42:06.398 "raid_level": "raid1", 00:42:06.398 "superblock": true, 00:42:06.398 "num_base_bdevs": 4, 00:42:06.398 "num_base_bdevs_discovered": 3, 00:42:06.398 "num_base_bdevs_operational": 3, 00:42:06.398 "base_bdevs_list": [ 00:42:06.398 { 00:42:06.398 "name": "spare", 00:42:06.398 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:06.398 "is_configured": true, 00:42:06.398 "data_offset": 2048, 00:42:06.398 "data_size": 63488 00:42:06.398 }, 00:42:06.398 { 00:42:06.398 "name": null, 00:42:06.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.398 "is_configured": false, 00:42:06.398 "data_offset": 2048, 00:42:06.398 "data_size": 63488 00:42:06.398 }, 00:42:06.398 { 00:42:06.398 "name": "BaseBdev3", 00:42:06.398 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:06.398 "is_configured": true, 00:42:06.398 "data_offset": 2048, 00:42:06.398 "data_size": 63488 00:42:06.398 }, 00:42:06.398 { 00:42:06.398 "name": "BaseBdev4", 00:42:06.398 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:06.398 "is_configured": true, 00:42:06.398 "data_offset": 2048, 00:42:06.398 "data_size": 63488 00:42:06.398 } 00:42:06.398 ] 00:42:06.398 }' 00:42:06.398 16:19:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:06.398 16:19:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:06.398 16:19:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:06.656 16:19:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:06.657 "name": "raid_bdev1", 00:42:06.657 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:06.657 "strip_size_kb": 0, 00:42:06.657 "state": "online", 00:42:06.657 "raid_level": "raid1", 00:42:06.657 "superblock": true, 00:42:06.657 
"num_base_bdevs": 4, 00:42:06.657 "num_base_bdevs_discovered": 3, 00:42:06.657 "num_base_bdevs_operational": 3, 00:42:06.657 "base_bdevs_list": [ 00:42:06.657 { 00:42:06.657 "name": "spare", 00:42:06.657 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:06.657 "is_configured": true, 00:42:06.657 "data_offset": 2048, 00:42:06.657 "data_size": 63488 00:42:06.657 }, 00:42:06.657 { 00:42:06.657 "name": null, 00:42:06.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:06.657 "is_configured": false, 00:42:06.657 "data_offset": 2048, 00:42:06.657 "data_size": 63488 00:42:06.657 }, 00:42:06.657 { 00:42:06.657 "name": "BaseBdev3", 00:42:06.657 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:06.657 "is_configured": true, 00:42:06.657 "data_offset": 2048, 00:42:06.657 "data_size": 63488 00:42:06.657 }, 00:42:06.657 { 00:42:06.657 "name": "BaseBdev4", 00:42:06.657 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:06.657 "is_configured": true, 00:42:06.657 "data_offset": 2048, 00:42:06.657 "data_size": 63488 00:42:06.657 } 00:42:06.657 ] 00:42:06.657 }' 00:42:06.657 16:19:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:06.657 16:19:10 -- common/autotest_common.sh@10 -- # set +x 00:42:07.223 16:19:11 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:07.223 [2024-07-22 16:19:11.477025] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:07.223 [2024-07-22 16:19:11.477105] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:07.223 [2024-07-22 16:19:11.477229] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:07.223 [2024-07-22 16:19:11.477345] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:07.223 [2024-07-22 16:19:11.477362] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:42:07.481 16:19:11 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:07.481 16:19:11 -- bdev/bdev_raid.sh@671 -- # jq length 00:42:07.739 16:19:11 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:42:07.739 16:19:11 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:42:07.739 16:19:11 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@12 -- # local i 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:07.739 16:19:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:42:07.739 /dev/nbd0 00:42:07.998 16:19:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:07.998 16:19:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:07.998 16:19:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:42:07.998 16:19:12 -- 
common/autotest_common.sh@857 -- # local i 00:42:07.998 16:19:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:07.998 16:19:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:07.998 16:19:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:42:07.998 16:19:12 -- common/autotest_common.sh@861 -- # break 00:42:07.998 16:19:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:07.998 16:19:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:42:07.998 16:19:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:07.998 1+0 records in 00:42:07.998 1+0 records out 00:42:07.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000515174 s, 8.0 MB/s 00:42:07.998 16:19:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:07.998 16:19:12 -- common/autotest_common.sh@874 -- # size=4096 00:42:07.998 16:19:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:07.998 16:19:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:07.998 16:19:12 -- common/autotest_common.sh@877 -- # return 0 00:42:07.998 16:19:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:07.998 16:19:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:07.998 16:19:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:42:07.998 /dev/nbd1 00:42:07.998 16:19:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:07.998 16:19:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:07.998 16:19:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:42:07.998 16:19:12 -- common/autotest_common.sh@857 -- # local i 00:42:07.998 16:19:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:07.998 16:19:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:07.998 16:19:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:42:08.256 16:19:12 -- common/autotest_common.sh@861 -- # break 00:42:08.256 16:19:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:08.256 16:19:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:42:08.256 16:19:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:08.256 1+0 records in 00:42:08.256 1+0 records out 00:42:08.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375093 s, 10.9 MB/s 00:42:08.256 16:19:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:08.256 16:19:12 -- common/autotest_common.sh@874 -- # size=4096 00:42:08.256 16:19:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:08.256 16:19:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:08.256 16:19:12 -- common/autotest_common.sh@877 -- # return 0 00:42:08.256 16:19:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:08.256 16:19:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:08.256 16:19:12 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:08.256 16:19:12 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:42:08.256 16:19:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:08.256 16:19:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:42:08.256 16:19:12 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:42:08.256 16:19:12 -- bdev/nbd_common.sh@51 -- # local i 00:42:08.256 16:19:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:08.256 16:19:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@41 -- # break 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@45 -- # return 0 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:08.515 16:19:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@41 -- # break 00:42:08.773 16:19:13 -- bdev/nbd_common.sh@45 -- # return 0 00:42:08.773 16:19:13 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:42:08.773 16:19:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:08.773 16:19:13 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:42:08.773 16:19:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:42:09.339 16:19:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:09.339 [2024-07-22 16:19:13.564527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:09.339 [2024-07-22 16:19:13.564931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:09.339 [2024-07-22 16:19:13.564992] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:42:09.339 [2024-07-22 16:19:13.565035] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:09.339 [2024-07-22 16:19:13.568301] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:09.339 [2024-07-22 16:19:13.568348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:09.339 [2024-07-22 16:19:13.568480] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:09.339 [2024-07-22 16:19:13.568547] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:09.339 BaseBdev1 00:42:09.339 16:19:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:09.339 16:19:13 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:42:09.339 16:19:13 -- bdev/bdev_raid.sh@696 -- # continue 00:42:09.339 16:19:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:09.339 16:19:13 -- 
bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:42:09.339 16:19:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:42:09.595 16:19:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:42:09.853 [2024-07-22 16:19:14.040810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:42:09.853 [2024-07-22 16:19:14.041106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:09.853 [2024-07-22 16:19:14.041210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:42:09.853 [2024-07-22 16:19:14.041436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:09.853 [2024-07-22 16:19:14.042099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:09.853 [2024-07-22 16:19:14.042137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:42:09.853 [2024-07-22 16:19:14.042278] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:42:09.853 [2024-07-22 16:19:14.042299] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:42:09.853 [2024-07-22 16:19:14.042316] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:09.853 [2024-07-22 16:19:14.042351] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:42:09.853 [2024-07-22 16:19:14.042459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:09.853 BaseBdev3 00:42:09.853 16:19:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:09.853 16:19:14 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:42:09.853 16:19:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:42:10.111 16:19:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:42:10.369 [2024-07-22 16:19:14.504912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:42:10.369 [2024-07-22 16:19:14.505311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:10.369 [2024-07-22 16:19:14.505397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:42:10.369 [2024-07-22 16:19:14.505651] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:10.369 [2024-07-22 16:19:14.506338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:10.369 [2024-07-22 16:19:14.506387] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:42:10.370 [2024-07-22 16:19:14.506511] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:42:10.370 [2024-07-22 16:19:14.506567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:10.370 BaseBdev4 00:42:10.370 16:19:14 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:42:10.628 16:19:14 -- 
bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:10.886 [2024-07-22 16:19:14.968994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:10.886 [2024-07-22 16:19:14.969419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:10.886 [2024-07-22 16:19:14.969587] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:42:10.886 [2024-07-22 16:19:14.969717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:10.886 [2024-07-22 16:19:14.970527] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:10.886 [2024-07-22 16:19:14.970725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:10.886 [2024-07-22 16:19:14.970978] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:42:10.886 [2024-07-22 16:19:14.971158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:10.886 spare 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:10.886 16:19:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:10.886 [2024-07-22 16:19:15.071481] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:42:10.886 [2024-07-22 16:19:15.071782] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:42:10.886 [2024-07-22 16:19:15.072067] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1530 00:42:10.886 [2024-07-22 16:19:15.072784] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:42:10.886 [2024-07-22 16:19:15.072914] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:42:10.886 [2024-07-22 16:19:15.073363] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:11.145 16:19:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:11.145 "name": "raid_bdev1", 00:42:11.145 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:11.145 "strip_size_kb": 0, 00:42:11.145 "state": "online", 00:42:11.145 "raid_level": "raid1", 00:42:11.145 "superblock": true, 00:42:11.145 "num_base_bdevs": 4, 00:42:11.145 "num_base_bdevs_discovered": 3, 00:42:11.145 "num_base_bdevs_operational": 3, 00:42:11.145 "base_bdevs_list": [ 00:42:11.145 { 00:42:11.145 "name": "spare", 00:42:11.145 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:11.145 "is_configured": 
true, 00:42:11.145 "data_offset": 2048, 00:42:11.145 "data_size": 63488 00:42:11.145 }, 00:42:11.145 { 00:42:11.145 "name": null, 00:42:11.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:11.145 "is_configured": false, 00:42:11.145 "data_offset": 2048, 00:42:11.145 "data_size": 63488 00:42:11.145 }, 00:42:11.145 { 00:42:11.145 "name": "BaseBdev3", 00:42:11.145 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:11.145 "is_configured": true, 00:42:11.145 "data_offset": 2048, 00:42:11.145 "data_size": 63488 00:42:11.145 }, 00:42:11.145 { 00:42:11.145 "name": "BaseBdev4", 00:42:11.145 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:11.145 "is_configured": true, 00:42:11.145 "data_offset": 2048, 00:42:11.145 "data_size": 63488 00:42:11.145 } 00:42:11.145 ] 00:42:11.145 }' 00:42:11.145 16:19:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:11.145 16:19:15 -- common/autotest_common.sh@10 -- # set +x 00:42:11.403 16:19:15 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:11.403 16:19:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:11.403 16:19:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:11.403 16:19:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:11.403 16:19:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:11.403 16:19:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:11.403 16:19:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:11.662 16:19:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:11.662 "name": "raid_bdev1", 00:42:11.662 "uuid": "6feb5993-e478-4882-900f-bb60a13a727c", 00:42:11.662 "strip_size_kb": 0, 00:42:11.662 "state": "online", 00:42:11.662 "raid_level": "raid1", 00:42:11.662 "superblock": true, 00:42:11.662 "num_base_bdevs": 4, 00:42:11.662 "num_base_bdevs_discovered": 3, 00:42:11.662 "num_base_bdevs_operational": 3, 00:42:11.662 "base_bdevs_list": [ 00:42:11.662 { 00:42:11.662 "name": "spare", 00:42:11.662 "uuid": "eb8b4418-a3cc-5ba4-be0b-322e4909222c", 00:42:11.662 "is_configured": true, 00:42:11.662 "data_offset": 2048, 00:42:11.662 "data_size": 63488 00:42:11.662 }, 00:42:11.662 { 00:42:11.662 "name": null, 00:42:11.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:11.662 "is_configured": false, 00:42:11.662 "data_offset": 2048, 00:42:11.662 "data_size": 63488 00:42:11.662 }, 00:42:11.662 { 00:42:11.662 "name": "BaseBdev3", 00:42:11.662 "uuid": "67c5b267-6487-5de9-bb4f-8abe1235fcb6", 00:42:11.662 "is_configured": true, 00:42:11.662 "data_offset": 2048, 00:42:11.662 "data_size": 63488 00:42:11.662 }, 00:42:11.662 { 00:42:11.662 "name": "BaseBdev4", 00:42:11.662 "uuid": "c7686cf1-7110-5eaf-a2e6-00de3445f6d0", 00:42:11.662 "is_configured": true, 00:42:11.662 "data_offset": 2048, 00:42:11.662 "data_size": 63488 00:42:11.662 } 00:42:11.662 ] 00:42:11.662 }' 00:42:11.662 16:19:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:11.662 16:19:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:11.662 16:19:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:11.662 16:19:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:11.662 16:19:15 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:11.662 16:19:15 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:11.920 16:19:16 -- 
bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:42:11.920 16:19:16 -- bdev/bdev_raid.sh@709 -- # killprocess 82106 00:42:11.920 16:19:16 -- common/autotest_common.sh@926 -- # '[' -z 82106 ']' 00:42:11.920 16:19:16 -- common/autotest_common.sh@930 -- # kill -0 82106 00:42:11.920 16:19:16 -- common/autotest_common.sh@931 -- # uname 00:42:11.920 16:19:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:11.920 16:19:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82106 00:42:11.920 killing process with pid 82106 00:42:11.920 Received shutdown signal, test time was about 60.000000 seconds 00:42:11.920 00:42:11.920 Latency(us) 00:42:11.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:11.920 =================================================================================================================== 00:42:11.920 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:11.920 16:19:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:42:11.920 16:19:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:42:11.920 16:19:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82106' 00:42:11.920 16:19:16 -- common/autotest_common.sh@945 -- # kill 82106 00:42:11.920 16:19:16 -- common/autotest_common.sh@950 -- # wait 82106 00:42:11.920 [2024-07-22 16:19:16.050505] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:11.920 [2024-07-22 16:19:16.050646] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:11.920 [2024-07-22 16:19:16.050764] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:11.920 [2024-07-22 16:19:16.050785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:42:12.487 [2024-07-22 16:19:16.515354] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@711 -- # return 0 00:42:13.865 00:42:13.865 real 0m28.614s 00:42:13.865 user 0m38.143s 00:42:13.865 sys 0m5.084s 00:42:13.865 16:19:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:13.865 ************************************ 00:42:13.865 END TEST raid_rebuild_test_sb 00:42:13.865 ************************************ 00:42:13.865 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:42:13.865 16:19:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:42:13.865 16:19:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:42:13.865 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:42:13.865 ************************************ 00:42:13.865 START TEST raid_rebuild_test_io 00:42:13.865 ************************************ 00:42:13.865 16:19:17 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:42:13.865 16:19:17 
-- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@544 -- # raid_pid=82731 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@545 -- # waitforlisten 82731 /var/tmp/spdk-raid.sock 00:42:13.865 16:19:17 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:13.865 16:19:17 -- common/autotest_common.sh@819 -- # '[' -z 82731 ']' 00:42:13.865 16:19:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:42:13.865 16:19:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:13.865 16:19:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:42:13.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:42:13.865 16:19:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:13.865 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:42:13.865 [2024-07-22 16:19:17.992737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:42:13.865 [2024-07-22 16:19:17.992976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82731 ] 00:42:13.865 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:13.865 Zero copy mechanism will not be used. 
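The trace above records how raid_rebuild_test_io brings up its RPC target: bdevperf is started against /var/tmp/spdk-raid.sock with a 60-second randrw workload on raid_bdev1, and the script then waits for the socket before issuing any RPCs. A minimal sketch of that launch sequence, using only the binary, flags and helper visible in this log (the PID 82731 and the repo paths are specific to this run), looks like:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk-raid.sock

    # Start bdevperf as the RPC target that will host raid_bdev1 and drive
    # background random read/write I/O (50% mix, 3 MiB I/Os, queue depth 2)
    # against it for 60 seconds.
    "$SPDK/build/examples/bdevperf" -r "$RPC_SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Block until the application is listening on the UNIX domain socket
    # before any rpc.py call is made against it.
    waitforlisten "$raid_pid" "$RPC_SOCK"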
00:42:14.124 [2024-07-22 16:19:18.175837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.382 [2024-07-22 16:19:18.448268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:14.640 [2024-07-22 16:19:18.675769] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:14.898 16:19:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:14.898 16:19:18 -- common/autotest_common.sh@852 -- # return 0 00:42:14.898 16:19:18 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:14.898 16:19:18 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:14.898 16:19:18 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:42:15.156 BaseBdev1 00:42:15.156 16:19:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:15.156 16:19:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:15.156 16:19:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:42:15.414 BaseBdev2 00:42:15.414 16:19:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:15.414 16:19:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:15.414 16:19:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:42:15.672 BaseBdev3 00:42:15.672 16:19:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:15.672 16:19:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:42:15.672 16:19:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:42:15.930 BaseBdev4 00:42:15.930 16:19:20 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:42:16.187 spare_malloc 00:42:16.187 16:19:20 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:16.446 spare_delay 00:42:16.446 16:19:20 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:16.704 [2024-07-22 16:19:20.792744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:16.704 [2024-07-22 16:19:20.792891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:16.704 [2024-07-22 16:19:20.792949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:42:16.704 [2024-07-22 16:19:20.792969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:16.704 [2024-07-22 16:19:20.796174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:16.704 [2024-07-22 16:19:20.796227] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:16.704 spare 00:42:16.704 16:19:20 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:42:16.961 [2024-07-22 16:19:21.020917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:16.961 [2024-07-22 16:19:21.023631] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:16.961 [2024-07-22 16:19:21.023718] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:16.961 [2024-07-22 16:19:21.023781] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:16.961 [2024-07-22 16:19:21.023891] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:42:16.961 [2024-07-22 16:19:21.023926] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:42:16.961 [2024-07-22 16:19:21.024172] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:42:16.961 [2024-07-22 16:19:21.024687] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:42:16.961 [2024-07-22 16:19:21.024730] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:42:16.961 [2024-07-22 16:19:21.025120] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:16.961 16:19:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:17.219 16:19:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:17.219 "name": "raid_bdev1", 00:42:17.219 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:17.219 "strip_size_kb": 0, 00:42:17.219 "state": "online", 00:42:17.219 "raid_level": "raid1", 00:42:17.219 "superblock": false, 00:42:17.219 "num_base_bdevs": 4, 00:42:17.219 "num_base_bdevs_discovered": 4, 00:42:17.219 "num_base_bdevs_operational": 4, 00:42:17.219 "base_bdevs_list": [ 00:42:17.219 { 00:42:17.219 "name": "BaseBdev1", 00:42:17.219 "uuid": "2d1f7b4b-e46e-477c-b565-09cfbab62e80", 00:42:17.219 "is_configured": true, 00:42:17.219 "data_offset": 0, 00:42:17.219 "data_size": 65536 00:42:17.219 }, 00:42:17.219 { 00:42:17.219 "name": "BaseBdev2", 00:42:17.219 "uuid": "525606e0-777e-4208-baab-a7e2807b89d6", 00:42:17.219 "is_configured": true, 00:42:17.219 "data_offset": 0, 00:42:17.219 "data_size": 65536 00:42:17.219 }, 00:42:17.219 { 00:42:17.219 "name": "BaseBdev3", 00:42:17.219 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:17.219 "is_configured": true, 00:42:17.219 "data_offset": 0, 00:42:17.219 "data_size": 65536 00:42:17.219 }, 00:42:17.219 { 00:42:17.219 "name": "BaseBdev4", 00:42:17.219 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:17.219 "is_configured": true, 00:42:17.219 "data_offset": 0, 00:42:17.219 "data_size": 65536 00:42:17.219 } 00:42:17.219 ] 00:42:17.219 }' 00:42:17.219 
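At this point the trace shows the RAID1 bdev under test being assembled from four 32 MiB / 512-byte-block malloc base bdevs and its state being read back for verification. A condensed sketch of those RPC calls, taking each command verbatim from the log (the rpc helper function is shorthand introduced here for brevity, not part of the test scripts), would be:

    # Shorthand for the repeated rpc.py invocations seen in the trace.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Four 32 MiB malloc bdevs with 512-byte blocks serve as base devices
    # (65536 blocks each, matching the blockcnt reported above).
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        rpc bdev_malloc_create 32 512 -b "$bdev"
    done

    # Build a RAID1 bdev (strip size 0, no superblock) over all four bases.
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

    # Read the state back, filtering the full bdev list down to raid_bdev1,
    # as verify_raid_bdev_state does in the trace.
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'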
16:19:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:17.219 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:42:17.477 16:19:21 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:17.477 16:19:21 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:42:17.735 [2024-07-22 16:19:21.921698] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:17.735 16:19:21 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:42:17.735 16:19:21 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:17.735 16:19:21 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:17.997 16:19:22 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:42:17.997 16:19:22 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:42:17.997 16:19:22 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:42:17.997 16:19:22 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:42:18.255 [2024-07-22 16:19:22.282980] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:42:18.255 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:18.255 Zero copy mechanism will not be used. 00:42:18.255 Running I/O for 60 seconds... 00:42:18.255 [2024-07-22 16:19:22.379385] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:18.255 [2024-07-22 16:19:22.394532] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:18.255 16:19:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:18.514 16:19:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:18.514 "name": "raid_bdev1", 00:42:18.514 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:18.514 "strip_size_kb": 0, 00:42:18.514 "state": "online", 00:42:18.514 "raid_level": "raid1", 00:42:18.514 "superblock": false, 00:42:18.514 "num_base_bdevs": 4, 00:42:18.514 "num_base_bdevs_discovered": 3, 00:42:18.514 "num_base_bdevs_operational": 3, 00:42:18.514 "base_bdevs_list": [ 00:42:18.514 { 00:42:18.514 "name": null, 00:42:18.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:18.514 "is_configured": false, 00:42:18.514 "data_offset": 0, 00:42:18.514 "data_size": 65536 00:42:18.514 }, 00:42:18.514 { 00:42:18.514 "name": "BaseBdev2", 00:42:18.514 
"uuid": "525606e0-777e-4208-baab-a7e2807b89d6", 00:42:18.514 "is_configured": true, 00:42:18.514 "data_offset": 0, 00:42:18.514 "data_size": 65536 00:42:18.514 }, 00:42:18.514 { 00:42:18.514 "name": "BaseBdev3", 00:42:18.514 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:18.514 "is_configured": true, 00:42:18.514 "data_offset": 0, 00:42:18.514 "data_size": 65536 00:42:18.514 }, 00:42:18.514 { 00:42:18.514 "name": "BaseBdev4", 00:42:18.514 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:18.514 "is_configured": true, 00:42:18.514 "data_offset": 0, 00:42:18.514 "data_size": 65536 00:42:18.514 } 00:42:18.514 ] 00:42:18.514 }' 00:42:18.514 16:19:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:18.514 16:19:22 -- common/autotest_common.sh@10 -- # set +x 00:42:19.088 16:19:23 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:19.088 [2024-07-22 16:19:23.296870] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:42:19.088 [2024-07-22 16:19:23.296959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:19.088 16:19:23 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:42:19.357 [2024-07-22 16:19:23.365938] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:42:19.357 [2024-07-22 16:19:23.368862] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:19.357 [2024-07-22 16:19:23.482412] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:19.357 [2024-07-22 16:19:23.483454] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:19.616 [2024-07-22 16:19:23.738615] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:19.616 [2024-07-22 16:19:23.739685] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:20.198 16:19:24 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:20.198 16:19:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:20.198 16:19:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:20.198 16:19:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:20.198 16:19:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:20.198 16:19:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:20.198 16:19:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:20.456 16:19:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:20.456 "name": "raid_bdev1", 00:42:20.456 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:20.456 "strip_size_kb": 0, 00:42:20.456 "state": "online", 00:42:20.456 "raid_level": "raid1", 00:42:20.456 "superblock": false, 00:42:20.456 "num_base_bdevs": 4, 00:42:20.456 "num_base_bdevs_discovered": 4, 00:42:20.456 "num_base_bdevs_operational": 4, 00:42:20.456 "process": { 00:42:20.456 "type": "rebuild", 00:42:20.456 "target": "spare", 00:42:20.456 "progress": { 00:42:20.456 "blocks": 14336, 00:42:20.456 "percent": 21 00:42:20.456 } 00:42:20.456 }, 00:42:20.456 "base_bdevs_list": [ 00:42:20.456 { 00:42:20.456 "name": "spare", 00:42:20.456 "uuid": 
"c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:20.456 "is_configured": true, 00:42:20.456 "data_offset": 0, 00:42:20.456 "data_size": 65536 00:42:20.456 }, 00:42:20.456 { 00:42:20.456 "name": "BaseBdev2", 00:42:20.456 "uuid": "525606e0-777e-4208-baab-a7e2807b89d6", 00:42:20.456 "is_configured": true, 00:42:20.456 "data_offset": 0, 00:42:20.456 "data_size": 65536 00:42:20.456 }, 00:42:20.456 { 00:42:20.456 "name": "BaseBdev3", 00:42:20.456 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:20.456 "is_configured": true, 00:42:20.456 "data_offset": 0, 00:42:20.456 "data_size": 65536 00:42:20.456 }, 00:42:20.456 { 00:42:20.456 "name": "BaseBdev4", 00:42:20.456 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:20.456 "is_configured": true, 00:42:20.456 "data_offset": 0, 00:42:20.456 "data_size": 65536 00:42:20.456 } 00:42:20.456 ] 00:42:20.456 }' 00:42:20.456 16:19:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:20.456 16:19:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:20.456 16:19:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:20.456 16:19:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:20.456 16:19:24 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:42:20.456 [2024-07-22 16:19:24.661415] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:42:20.456 [2024-07-22 16:19:24.662324] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:42:20.713 [2024-07-22 16:19:24.891592] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:20.971 [2024-07-22 16:19:25.031632] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:20.971 [2024-07-22 16:19:25.047080] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:20.971 [2024-07-22 16:19:25.080035] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:20.971 16:19:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:21.229 16:19:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:21.229 "name": "raid_bdev1", 00:42:21.229 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:21.229 "strip_size_kb": 0, 00:42:21.229 "state": "online", 00:42:21.229 "raid_level": "raid1", 00:42:21.229 "superblock": false, 00:42:21.229 "num_base_bdevs": 4, 
00:42:21.229 "num_base_bdevs_discovered": 3, 00:42:21.229 "num_base_bdevs_operational": 3, 00:42:21.229 "base_bdevs_list": [ 00:42:21.229 { 00:42:21.229 "name": null, 00:42:21.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:21.229 "is_configured": false, 00:42:21.229 "data_offset": 0, 00:42:21.229 "data_size": 65536 00:42:21.229 }, 00:42:21.229 { 00:42:21.229 "name": "BaseBdev2", 00:42:21.229 "uuid": "525606e0-777e-4208-baab-a7e2807b89d6", 00:42:21.229 "is_configured": true, 00:42:21.229 "data_offset": 0, 00:42:21.229 "data_size": 65536 00:42:21.229 }, 00:42:21.229 { 00:42:21.229 "name": "BaseBdev3", 00:42:21.229 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:21.229 "is_configured": true, 00:42:21.229 "data_offset": 0, 00:42:21.229 "data_size": 65536 00:42:21.229 }, 00:42:21.229 { 00:42:21.229 "name": "BaseBdev4", 00:42:21.229 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:21.229 "is_configured": true, 00:42:21.229 "data_offset": 0, 00:42:21.229 "data_size": 65536 00:42:21.229 } 00:42:21.229 ] 00:42:21.229 }' 00:42:21.229 16:19:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:21.229 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:42:21.795 16:19:25 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:21.795 16:19:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:21.795 16:19:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:21.795 16:19:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:21.795 16:19:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:21.795 16:19:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:21.795 16:19:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:22.053 16:19:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:22.053 "name": "raid_bdev1", 00:42:22.053 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:22.053 "strip_size_kb": 0, 00:42:22.053 "state": "online", 00:42:22.053 "raid_level": "raid1", 00:42:22.053 "superblock": false, 00:42:22.053 "num_base_bdevs": 4, 00:42:22.053 "num_base_bdevs_discovered": 3, 00:42:22.053 "num_base_bdevs_operational": 3, 00:42:22.053 "base_bdevs_list": [ 00:42:22.053 { 00:42:22.053 "name": null, 00:42:22.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:22.053 "is_configured": false, 00:42:22.053 "data_offset": 0, 00:42:22.053 "data_size": 65536 00:42:22.053 }, 00:42:22.053 { 00:42:22.053 "name": "BaseBdev2", 00:42:22.053 "uuid": "525606e0-777e-4208-baab-a7e2807b89d6", 00:42:22.053 "is_configured": true, 00:42:22.053 "data_offset": 0, 00:42:22.053 "data_size": 65536 00:42:22.053 }, 00:42:22.053 { 00:42:22.053 "name": "BaseBdev3", 00:42:22.053 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:22.053 "is_configured": true, 00:42:22.053 "data_offset": 0, 00:42:22.053 "data_size": 65536 00:42:22.053 }, 00:42:22.053 { 00:42:22.053 "name": "BaseBdev4", 00:42:22.053 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:22.053 "is_configured": true, 00:42:22.053 "data_offset": 0, 00:42:22.053 "data_size": 65536 00:42:22.053 } 00:42:22.053 ] 00:42:22.053 }' 00:42:22.053 16:19:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:22.053 16:19:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:22.053 16:19:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:22.053 16:19:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:22.053 16:19:26 -- 
bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:22.053 [2024-07-22 16:19:26.319553] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:42:22.053 [2024-07-22 16:19:26.319658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:22.312 16:19:26 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:42:22.312 [2024-07-22 16:19:26.387149] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:42:22.312 [2024-07-22 16:19:26.390311] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:22.312 [2024-07-22 16:19:26.492880] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:22.312 [2024-07-22 16:19:26.493884] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:22.570 [2024-07-22 16:19:26.647838] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:22.570 [2024-07-22 16:19:26.648848] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:22.835 [2024-07-22 16:19:27.041370] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:42:23.095 [2024-07-22 16:19:27.276846] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:23.353 16:19:27 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:23.353 16:19:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:23.353 16:19:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:23.353 16:19:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:23.353 16:19:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:23.353 16:19:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:23.353 16:19:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:23.612 [2024-07-22 16:19:27.655650] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:23.612 [2024-07-22 16:19:27.656554] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:23.612 "name": "raid_bdev1", 00:42:23.612 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:23.612 "strip_size_kb": 0, 00:42:23.612 "state": "online", 00:42:23.612 "raid_level": "raid1", 00:42:23.612 "superblock": false, 00:42:23.612 "num_base_bdevs": 4, 00:42:23.612 "num_base_bdevs_discovered": 4, 00:42:23.612 "num_base_bdevs_operational": 4, 00:42:23.612 "process": { 00:42:23.612 "type": "rebuild", 00:42:23.612 "target": "spare", 00:42:23.612 "progress": { 00:42:23.612 "blocks": 12288, 00:42:23.612 "percent": 18 00:42:23.612 } 00:42:23.612 }, 00:42:23.612 "base_bdevs_list": [ 00:42:23.612 { 00:42:23.612 "name": "spare", 00:42:23.612 "uuid": "c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:23.612 "is_configured": true, 00:42:23.612 "data_offset": 0, 00:42:23.612 "data_size": 65536 00:42:23.612 }, 00:42:23.612 { 
00:42:23.612 "name": "BaseBdev2", 00:42:23.612 "uuid": "525606e0-777e-4208-baab-a7e2807b89d6", 00:42:23.612 "is_configured": true, 00:42:23.612 "data_offset": 0, 00:42:23.612 "data_size": 65536 00:42:23.612 }, 00:42:23.612 { 00:42:23.612 "name": "BaseBdev3", 00:42:23.612 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:23.612 "is_configured": true, 00:42:23.612 "data_offset": 0, 00:42:23.612 "data_size": 65536 00:42:23.612 }, 00:42:23.612 { 00:42:23.612 "name": "BaseBdev4", 00:42:23.612 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:23.612 "is_configured": true, 00:42:23.612 "data_offset": 0, 00:42:23.612 "data_size": 65536 00:42:23.612 } 00:42:23.612 ] 00:42:23.612 }' 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:42:23.612 16:19:27 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:42:23.612 [2024-07-22 16:19:27.861951] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:42:23.612 [2024-07-22 16:19:27.862477] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:42:23.870 [2024-07-22 16:19:27.920581] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:23.871 [2024-07-22 16:19:28.079074] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:42:23.871 [2024-07-22 16:19:28.079168] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:23.871 16:19:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:24.129 16:19:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:24.129 "name": "raid_bdev1", 00:42:24.129 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:24.129 "strip_size_kb": 0, 00:42:24.129 "state": "online", 00:42:24.129 "raid_level": "raid1", 00:42:24.129 "superblock": false, 00:42:24.129 "num_base_bdevs": 4, 00:42:24.129 "num_base_bdevs_discovered": 3, 00:42:24.129 "num_base_bdevs_operational": 3, 00:42:24.129 "process": { 00:42:24.129 "type": "rebuild", 00:42:24.129 "target": "spare", 00:42:24.129 "progress": { 00:42:24.129 
"blocks": 22528, 00:42:24.129 "percent": 34 00:42:24.129 } 00:42:24.129 }, 00:42:24.129 "base_bdevs_list": [ 00:42:24.129 { 00:42:24.129 "name": "spare", 00:42:24.129 "uuid": "c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:24.129 "is_configured": true, 00:42:24.129 "data_offset": 0, 00:42:24.129 "data_size": 65536 00:42:24.129 }, 00:42:24.129 { 00:42:24.129 "name": null, 00:42:24.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:24.129 "is_configured": false, 00:42:24.129 "data_offset": 0, 00:42:24.129 "data_size": 65536 00:42:24.129 }, 00:42:24.129 { 00:42:24.129 "name": "BaseBdev3", 00:42:24.129 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:24.129 "is_configured": true, 00:42:24.129 "data_offset": 0, 00:42:24.129 "data_size": 65536 00:42:24.129 }, 00:42:24.129 { 00:42:24.129 "name": "BaseBdev4", 00:42:24.129 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:24.129 "is_configured": true, 00:42:24.129 "data_offset": 0, 00:42:24.129 "data_size": 65536 00:42:24.129 } 00:42:24.129 ] 00:42:24.129 }' 00:42:24.129 16:19:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@657 -- # local timeout=533 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:24.448 16:19:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:24.707 16:19:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:24.707 "name": "raid_bdev1", 00:42:24.707 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:24.707 "strip_size_kb": 0, 00:42:24.707 "state": "online", 00:42:24.707 "raid_level": "raid1", 00:42:24.707 "superblock": false, 00:42:24.707 "num_base_bdevs": 4, 00:42:24.707 "num_base_bdevs_discovered": 3, 00:42:24.707 "num_base_bdevs_operational": 3, 00:42:24.707 "process": { 00:42:24.707 "type": "rebuild", 00:42:24.707 "target": "spare", 00:42:24.707 "progress": { 00:42:24.707 "blocks": 28672, 00:42:24.707 "percent": 43 00:42:24.707 } 00:42:24.707 }, 00:42:24.707 "base_bdevs_list": [ 00:42:24.707 { 00:42:24.707 "name": "spare", 00:42:24.707 "uuid": "c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:24.707 "is_configured": true, 00:42:24.707 "data_offset": 0, 00:42:24.707 "data_size": 65536 00:42:24.707 }, 00:42:24.707 { 00:42:24.707 "name": null, 00:42:24.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:24.707 "is_configured": false, 00:42:24.707 "data_offset": 0, 00:42:24.707 "data_size": 65536 00:42:24.707 }, 00:42:24.707 { 00:42:24.707 "name": "BaseBdev3", 00:42:24.707 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:24.707 "is_configured": true, 00:42:24.707 "data_offset": 0, 00:42:24.707 "data_size": 65536 00:42:24.707 }, 00:42:24.707 { 00:42:24.707 "name": "BaseBdev4", 00:42:24.707 
"uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:24.707 "is_configured": true, 00:42:24.707 "data_offset": 0, 00:42:24.707 "data_size": 65536 00:42:24.707 } 00:42:24.707 ] 00:42:24.707 }' 00:42:24.707 16:19:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:24.707 16:19:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:24.707 16:19:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:24.707 16:19:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:24.707 16:19:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:24.707 [2024-07-22 16:19:28.929451] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:42:25.273 [2024-07-22 16:19:29.423315] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:42:25.531 [2024-07-22 16:19:29.767808] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:42:25.531 16:19:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:25.531 16:19:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:25.531 16:19:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:25.531 16:19:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:25.531 16:19:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:25.531 16:19:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:25.789 16:19:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:25.789 16:19:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:26.047 16:19:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:26.047 "name": "raid_bdev1", 00:42:26.047 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:26.047 "strip_size_kb": 0, 00:42:26.047 "state": "online", 00:42:26.047 "raid_level": "raid1", 00:42:26.047 "superblock": false, 00:42:26.047 "num_base_bdevs": 4, 00:42:26.048 "num_base_bdevs_discovered": 3, 00:42:26.048 "num_base_bdevs_operational": 3, 00:42:26.048 "process": { 00:42:26.048 "type": "rebuild", 00:42:26.048 "target": "spare", 00:42:26.048 "progress": { 00:42:26.048 "blocks": 51200, 00:42:26.048 "percent": 78 00:42:26.048 } 00:42:26.048 }, 00:42:26.048 "base_bdevs_list": [ 00:42:26.048 { 00:42:26.048 "name": "spare", 00:42:26.048 "uuid": "c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:26.048 "is_configured": true, 00:42:26.048 "data_offset": 0, 00:42:26.048 "data_size": 65536 00:42:26.048 }, 00:42:26.048 { 00:42:26.048 "name": null, 00:42:26.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:26.048 "is_configured": false, 00:42:26.048 "data_offset": 0, 00:42:26.048 "data_size": 65536 00:42:26.048 }, 00:42:26.048 { 00:42:26.048 "name": "BaseBdev3", 00:42:26.048 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:26.048 "is_configured": true, 00:42:26.048 "data_offset": 0, 00:42:26.048 "data_size": 65536 00:42:26.048 }, 00:42:26.048 { 00:42:26.048 "name": "BaseBdev4", 00:42:26.048 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:26.048 "is_configured": true, 00:42:26.048 "data_offset": 0, 00:42:26.048 "data_size": 65536 00:42:26.048 } 00:42:26.048 ] 00:42:26.048 }' 00:42:26.048 16:19:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:26.048 16:19:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:42:26.048 16:19:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:26.048 16:19:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:26.048 16:19:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:26.615 [2024-07-22 16:19:30.785370] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:26.615 [2024-07-22 16:19:30.858633] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:26.615 [2024-07-22 16:19:30.871206] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:26.873 16:19:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:26.873 16:19:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:26.873 16:19:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:26.873 16:19:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:26.873 16:19:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:26.873 16:19:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:27.131 16:19:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:27.131 16:19:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.389 16:19:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:27.390 "name": "raid_bdev1", 00:42:27.390 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:27.390 "strip_size_kb": 0, 00:42:27.390 "state": "online", 00:42:27.390 "raid_level": "raid1", 00:42:27.390 "superblock": false, 00:42:27.390 "num_base_bdevs": 4, 00:42:27.390 "num_base_bdevs_discovered": 3, 00:42:27.390 "num_base_bdevs_operational": 3, 00:42:27.390 "base_bdevs_list": [ 00:42:27.390 { 00:42:27.390 "name": "spare", 00:42:27.390 "uuid": "c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:27.390 "is_configured": true, 00:42:27.390 "data_offset": 0, 00:42:27.390 "data_size": 65536 00:42:27.390 }, 00:42:27.390 { 00:42:27.390 "name": null, 00:42:27.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:27.390 "is_configured": false, 00:42:27.390 "data_offset": 0, 00:42:27.390 "data_size": 65536 00:42:27.390 }, 00:42:27.390 { 00:42:27.390 "name": "BaseBdev3", 00:42:27.390 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:27.390 "is_configured": true, 00:42:27.390 "data_offset": 0, 00:42:27.390 "data_size": 65536 00:42:27.390 }, 00:42:27.390 { 00:42:27.390 "name": "BaseBdev4", 00:42:27.390 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:27.390 "is_configured": true, 00:42:27.390 "data_offset": 0, 00:42:27.390 "data_size": 65536 00:42:27.390 } 00:42:27.390 ] 00:42:27.390 }' 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@660 -- # break 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@188 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:27.390 16:19:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:27.648 "name": "raid_bdev1", 00:42:27.648 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:27.648 "strip_size_kb": 0, 00:42:27.648 "state": "online", 00:42:27.648 "raid_level": "raid1", 00:42:27.648 "superblock": false, 00:42:27.648 "num_base_bdevs": 4, 00:42:27.648 "num_base_bdevs_discovered": 3, 00:42:27.648 "num_base_bdevs_operational": 3, 00:42:27.648 "base_bdevs_list": [ 00:42:27.648 { 00:42:27.648 "name": "spare", 00:42:27.648 "uuid": "c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:27.648 "is_configured": true, 00:42:27.648 "data_offset": 0, 00:42:27.648 "data_size": 65536 00:42:27.648 }, 00:42:27.648 { 00:42:27.648 "name": null, 00:42:27.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:27.648 "is_configured": false, 00:42:27.648 "data_offset": 0, 00:42:27.648 "data_size": 65536 00:42:27.648 }, 00:42:27.648 { 00:42:27.648 "name": "BaseBdev3", 00:42:27.648 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:27.648 "is_configured": true, 00:42:27.648 "data_offset": 0, 00:42:27.648 "data_size": 65536 00:42:27.648 }, 00:42:27.648 { 00:42:27.648 "name": "BaseBdev4", 00:42:27.648 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:27.648 "is_configured": true, 00:42:27.648 "data_offset": 0, 00:42:27.648 "data_size": 65536 00:42:27.648 } 00:42:27.648 ] 00:42:27.648 }' 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:27.648 16:19:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:27.907 16:19:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:27.907 "name": "raid_bdev1", 00:42:27.907 "uuid": "91e420fd-b081-49d7-881e-e875d694efe8", 00:42:27.907 "strip_size_kb": 0, 00:42:27.907 "state": "online", 00:42:27.907 "raid_level": "raid1", 00:42:27.907 "superblock": false, 00:42:27.907 "num_base_bdevs": 4, 00:42:27.907 "num_base_bdevs_discovered": 3, 00:42:27.907 "num_base_bdevs_operational": 3, 00:42:27.907 "base_bdevs_list": [ 00:42:27.907 { 00:42:27.907 "name": "spare", 00:42:27.907 "uuid": "c5bd8b1b-abcb-5a7f-b07b-31c842331a04", 00:42:27.907 "is_configured": true, 00:42:27.907 
"data_offset": 0, 00:42:27.907 "data_size": 65536 00:42:27.907 }, 00:42:27.907 { 00:42:27.907 "name": null, 00:42:27.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:27.907 "is_configured": false, 00:42:27.907 "data_offset": 0, 00:42:27.907 "data_size": 65536 00:42:27.907 }, 00:42:27.907 { 00:42:27.907 "name": "BaseBdev3", 00:42:27.907 "uuid": "36c11d41-cc3a-46f1-9c42-84e75d4661ee", 00:42:27.907 "is_configured": true, 00:42:27.907 "data_offset": 0, 00:42:27.907 "data_size": 65536 00:42:27.907 }, 00:42:27.907 { 00:42:27.907 "name": "BaseBdev4", 00:42:27.907 "uuid": "9dcd389c-1896-44bb-951e-eadd6c52f1ae", 00:42:27.907 "is_configured": true, 00:42:27.907 "data_offset": 0, 00:42:27.907 "data_size": 65536 00:42:27.907 } 00:42:27.907 ] 00:42:27.907 }' 00:42:27.907 16:19:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:27.907 16:19:31 -- common/autotest_common.sh@10 -- # set +x 00:42:28.165 16:19:32 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:28.433 [2024-07-22 16:19:32.467370] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:28.433 [2024-07-22 16:19:32.467432] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:28.433 00:42:28.433 Latency(us) 00:42:28.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:28.433 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:42:28.433 raid_bdev1 : 10.27 89.49 268.47 0.00 0.00 15947.16 320.23 122969.37 00:42:28.433 =================================================================================================================== 00:42:28.433 Total : 89.49 268.47 0.00 0.00 15947.16 320.23 122969.37 00:42:28.433 0 00:42:28.433 [2024-07-22 16:19:32.576560] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:28.433 [2024-07-22 16:19:32.576670] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:28.433 [2024-07-22 16:19:32.576798] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:28.433 [2024-07-22 16:19:32.576823] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:42:28.433 16:19:32 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:28.433 16:19:32 -- bdev/bdev_raid.sh@671 -- # jq length 00:42:28.693 16:19:32 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:42:28.693 16:19:32 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:42:28.693 16:19:32 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@12 -- # local i 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:28.693 16:19:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:42:28.956 /dev/nbd0 
00:42:28.956 16:19:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:28.956 16:19:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:28.956 16:19:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:42:28.956 16:19:33 -- common/autotest_common.sh@857 -- # local i 00:42:28.956 16:19:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:28.956 16:19:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:28.956 16:19:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:42:28.956 16:19:33 -- common/autotest_common.sh@861 -- # break 00:42:28.956 16:19:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:28.956 16:19:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:42:28.956 16:19:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:28.956 1+0 records in 00:42:28.956 1+0 records out 00:42:28.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384274 s, 10.7 MB/s 00:42:28.956 16:19:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:28.956 16:19:33 -- common/autotest_common.sh@874 -- # size=4096 00:42:28.956 16:19:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:29.214 16:19:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:29.214 16:19:33 -- common/autotest_common.sh@877 -- # return 0 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:29.214 16:19:33 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:42:29.214 16:19:33 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:42:29.214 16:19:33 -- bdev/bdev_raid.sh@678 -- # continue 00:42:29.214 16:19:33 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:42:29.214 16:19:33 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:42:29.214 16:19:33 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:29.214 16:19:33 -- bdev/nbd_common.sh@12 -- # local i 00:42:29.215 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:29.215 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:29.215 16:19:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:42:29.215 /dev/nbd1 00:42:29.215 16:19:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:29.473 16:19:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:42:29.473 16:19:33 -- common/autotest_common.sh@857 -- # local i 00:42:29.473 16:19:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:29.473 16:19:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:29.473 16:19:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:42:29.473 16:19:33 -- common/autotest_common.sh@861 -- # break 00:42:29.473 16:19:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:29.473 16:19:33 -- common/autotest_common.sh@872 -- # 
(( i <= 20 )) 00:42:29.473 16:19:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:29.473 1+0 records in 00:42:29.473 1+0 records out 00:42:29.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418409 s, 9.8 MB/s 00:42:29.473 16:19:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:29.473 16:19:33 -- common/autotest_common.sh@874 -- # size=4096 00:42:29.473 16:19:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:29.473 16:19:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:29.473 16:19:33 -- common/autotest_common.sh@877 -- # return 0 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:29.473 16:19:33 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:42:29.473 16:19:33 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@51 -- # local i 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:29.473 16:19:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@41 -- # break 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@45 -- # return 0 00:42:29.760 16:19:33 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:42:29.760 16:19:33 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:42:29.760 16:19:33 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@12 -- # local i 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:29.760 16:19:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:42:30.018 /dev/nbd1 00:42:30.018 16:19:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:30.018 16:19:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:30.018 16:19:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:42:30.018 16:19:34 -- common/autotest_common.sh@857 -- # local i 00:42:30.018 16:19:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:30.018 16:19:34 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:30.018 16:19:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:42:30.018 16:19:34 -- common/autotest_common.sh@861 -- # break 00:42:30.018 16:19:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:30.018 16:19:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:42:30.018 16:19:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:30.018 1+0 records in 00:42:30.018 1+0 records out 00:42:30.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483742 s, 8.5 MB/s 00:42:30.018 16:19:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:30.018 16:19:34 -- common/autotest_common.sh@874 -- # size=4096 00:42:30.018 16:19:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:30.018 16:19:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:30.018 16:19:34 -- common/autotest_common.sh@877 -- # return 0 00:42:30.018 16:19:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:30.018 16:19:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:30.018 16:19:34 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:42:30.276 16:19:34 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:42:30.276 16:19:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:30.276 16:19:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:42:30.276 16:19:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:30.276 16:19:34 -- bdev/nbd_common.sh@51 -- # local i 00:42:30.276 16:19:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:30.276 16:19:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@41 -- # break 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@45 -- # return 0 00:42:30.534 16:19:34 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@51 -- # local i 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:30.534 16:19:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:42:30.793 16:19:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:30.793 16:19:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:30.793 16:19:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:30.793 16:19:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:30.793 16:19:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:30.793 16:19:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:30.793 16:19:34 -- 
bdev/nbd_common.sh@41 -- # break 00:42:30.793 16:19:34 -- bdev/nbd_common.sh@45 -- # return 0 00:42:30.793 16:19:34 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:42:30.793 16:19:34 -- bdev/bdev_raid.sh@709 -- # killprocess 82731 00:42:30.793 16:19:34 -- common/autotest_common.sh@926 -- # '[' -z 82731 ']' 00:42:30.793 16:19:34 -- common/autotest_common.sh@930 -- # kill -0 82731 00:42:30.793 16:19:34 -- common/autotest_common.sh@931 -- # uname 00:42:30.793 16:19:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:30.793 16:19:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82731 00:42:30.793 killing process with pid 82731 00:42:30.793 Received shutdown signal, test time was about 12.681095 seconds 00:42:30.793 00:42:30.793 Latency(us) 00:42:30.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:30.793 =================================================================================================================== 00:42:30.793 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:30.793 16:19:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:42:30.793 16:19:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:42:30.793 16:19:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82731' 00:42:30.793 16:19:34 -- common/autotest_common.sh@945 -- # kill 82731 00:42:30.793 [2024-07-22 16:19:34.967012] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:30.793 16:19:34 -- common/autotest_common.sh@950 -- # wait 82731 00:42:31.359 [2024-07-22 16:19:35.365269] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:32.733 16:19:36 -- bdev/bdev_raid.sh@711 -- # return 0 00:42:32.733 00:42:32.733 real 0m18.848s 00:42:32.733 user 0m27.361s 00:42:32.734 sys 0m2.943s 00:42:32.734 16:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:32.734 ************************************ 00:42:32.734 END TEST raid_rebuild_test_io 00:42:32.734 ************************************ 00:42:32.734 16:19:36 -- common/autotest_common.sh@10 -- # set +x 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:42:32.734 16:19:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:42:32.734 16:19:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:42:32.734 16:19:36 -- common/autotest_common.sh@10 -- # set +x 00:42:32.734 ************************************ 00:42:32.734 START TEST raid_rebuild_test_sb_io 00:42:32.734 ************************************ 00:42:32.734 16:19:36 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs 
)) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@544 -- # raid_pid=83211 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:42:32.734 16:19:36 -- bdev/bdev_raid.sh@545 -- # waitforlisten 83211 /var/tmp/spdk-raid.sock 00:42:32.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:42:32.734 16:19:36 -- common/autotest_common.sh@819 -- # '[' -z 83211 ']' 00:42:32.734 16:19:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:42:32.734 16:19:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:32.734 16:19:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:42:32.734 16:19:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:32.734 16:19:36 -- common/autotest_common.sh@10 -- # set +x 00:42:32.734 [2024-07-22 16:19:36.925850] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:42:32.734 [2024-07-22 16:19:36.926216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83211 ] 00:42:32.734 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:32.734 Zero copy mechanism will not be used. 
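(Editor's note, not part of the captured output.) Before any RAID bdevs exist, the test launches bdevperf with -z so the application pauses after initialization and waits to be configured over a private RPC socket. A rough sketch of that start-up and wait sequence is shown below; the command-line flags are copied from this run, and the polling loop is only an approximation of the harness's waitforlisten helper, here assumed to be satisfied once rpc_get_methods answers on the socket.

#!/usr/bin/env bash
# Sketch: start bdevperf suspended (-z) on a private RPC socket and wait
# until the RPC server responds before configuring bdevs.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock

"$spdk"/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Simplified waitforlisten: poll until the socket exists and answers an RPC.
for _ in $(seq 1 100); do
    if [ -S "$sock" ] && "$spdk"/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

# ...malloc/passthru/delay bdevs and the RAID bdev are then created via rpc.py...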
00:42:32.992 [2024-07-22 16:19:37.121304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:33.251 [2024-07-22 16:19:37.384605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:33.510 [2024-07-22 16:19:37.610400] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:33.769 16:19:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:33.769 16:19:37 -- common/autotest_common.sh@852 -- # return 0 00:42:33.769 16:19:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:33.769 16:19:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:42:33.769 16:19:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:42:34.027 BaseBdev1_malloc 00:42:34.027 16:19:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:34.027 [2024-07-22 16:19:38.297979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:34.027 [2024-07-22 16:19:38.298165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:34.027 [2024-07-22 16:19:38.298246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:42:34.027 [2024-07-22 16:19:38.298286] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:34.286 [2024-07-22 16:19:38.301161] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:34.286 [2024-07-22 16:19:38.301231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:34.286 BaseBdev1 00:42:34.286 16:19:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:34.286 16:19:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:42:34.286 16:19:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:42:34.544 BaseBdev2_malloc 00:42:34.544 16:19:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:42:34.803 [2024-07-22 16:19:38.838563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:42:34.803 [2024-07-22 16:19:38.838692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:34.803 [2024-07-22 16:19:38.838741] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:42:34.803 [2024-07-22 16:19:38.838767] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:34.803 [2024-07-22 16:19:38.841865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:34.803 [2024-07-22 16:19:38.841924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:42:34.803 BaseBdev2 00:42:34.803 16:19:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:34.803 16:19:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:42:34.803 16:19:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:42:35.061 BaseBdev3_malloc 00:42:35.061 16:19:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:42:35.319 [2024-07-22 16:19:39.348657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:42:35.319 [2024-07-22 16:19:39.348784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:35.320 [2024-07-22 16:19:39.348824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:42:35.320 [2024-07-22 16:19:39.348845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:35.320 [2024-07-22 16:19:39.351550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:35.320 [2024-07-22 16:19:39.351600] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:42:35.320 BaseBdev3 00:42:35.320 16:19:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:42:35.320 16:19:39 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:42:35.320 16:19:39 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:42:35.578 BaseBdev4_malloc 00:42:35.578 16:19:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:42:35.836 [2024-07-22 16:19:39.854340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:42:35.836 [2024-07-22 16:19:39.854437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:35.836 [2024-07-22 16:19:39.854482] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:42:35.836 [2024-07-22 16:19:39.854504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:35.836 [2024-07-22 16:19:39.857538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:35.836 [2024-07-22 16:19:39.857755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:42:35.836 BaseBdev4 00:42:35.836 16:19:39 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:42:36.094 spare_malloc 00:42:36.094 16:19:40 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:42:36.094 spare_delay 00:42:36.353 16:19:40 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:36.353 [2024-07-22 16:19:40.559696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:36.353 [2024-07-22 16:19:40.559819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:36.353 [2024-07-22 16:19:40.559866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:42:36.353 [2024-07-22 16:19:40.559888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:36.353 [2024-07-22 16:19:40.563030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:36.353 [2024-07-22 16:19:40.563137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:36.353 spare 00:42:36.353 16:19:40 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:42:36.612 [2024-07-22 16:19:40.796166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:36.612 [2024-07-22 16:19:40.799158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:42:36.612 [2024-07-22 16:19:40.799262] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:36.612 [2024-07-22 16:19:40.799350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:36.612 [2024-07-22 16:19:40.799621] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:42:36.612 [2024-07-22 16:19:40.799645] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:42:36.612 [2024-07-22 16:19:40.799809] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:42:36.612 [2024-07-22 16:19:40.800518] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:42:36.612 [2024-07-22 16:19:40.800727] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:42:36.612 [2024-07-22 16:19:40.801212] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:36.612 16:19:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:36.870 16:19:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:36.870 "name": "raid_bdev1", 00:42:36.870 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:36.870 "strip_size_kb": 0, 00:42:36.870 "state": "online", 00:42:36.870 "raid_level": "raid1", 00:42:36.870 "superblock": true, 00:42:36.870 "num_base_bdevs": 4, 00:42:36.870 "num_base_bdevs_discovered": 4, 00:42:36.870 "num_base_bdevs_operational": 4, 00:42:36.870 "base_bdevs_list": [ 00:42:36.870 { 00:42:36.870 "name": "BaseBdev1", 00:42:36.870 "uuid": "211ce663-e64e-5a1e-8901-7d939e9700aa", 00:42:36.870 "is_configured": true, 00:42:36.870 "data_offset": 2048, 00:42:36.870 "data_size": 63488 00:42:36.870 }, 00:42:36.870 { 00:42:36.870 "name": "BaseBdev2", 00:42:36.870 "uuid": "7769d198-ef46-5f7c-949a-fc406ce2e255", 00:42:36.870 "is_configured": true, 00:42:36.870 "data_offset": 2048, 00:42:36.870 "data_size": 63488 00:42:36.870 }, 00:42:36.870 { 00:42:36.870 "name": "BaseBdev3", 00:42:36.870 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:36.870 "is_configured": true, 00:42:36.870 "data_offset": 2048, 00:42:36.870 "data_size": 63488 00:42:36.870 }, 00:42:36.870 
{ 00:42:36.870 "name": "BaseBdev4", 00:42:36.870 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:36.870 "is_configured": true, 00:42:36.870 "data_offset": 2048, 00:42:36.870 "data_size": 63488 00:42:36.870 } 00:42:36.870 ] 00:42:36.870 }' 00:42:36.870 16:19:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:36.870 16:19:41 -- common/autotest_common.sh@10 -- # set +x 00:42:37.129 16:19:41 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:42:37.129 16:19:41 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:42:37.387 [2024-07-22 16:19:41.605574] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:42:37.387 16:19:41 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:42:37.387 16:19:41 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:37.387 16:19:41 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:42:37.646 16:19:41 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:42:37.646 16:19:41 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:42:37.646 16:19:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:42:37.646 16:19:41 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:42:37.905 [2024-07-22 16:19:42.018562] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:42:37.905 I/O size of 3145728 is greater than zero copy threshold (65536). 00:42:37.905 Zero copy mechanism will not be used. 00:42:37.905 Running I/O for 60 seconds... 
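(Editor's note, not part of the captured output.) With the superblock-enabled array online, the test reads back the array's block count and the base bdevs' data_offset (2048 blocks here, because the on-disk superblock occupies the front of each member), then asks the already-running bdevperf to start its 60-second random read/write workload through the separate bdevperf.py helper. A condensed sketch of those RPC interactions, using the paths from this run:

# Sketch: query array geometry and start the background workload.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock
rpc="$spdk/scripts/rpc.py -s $sock"

raid_size=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')
data_offset=$($rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[0].data_offset')
echo "raid_bdev1: ${raid_size} blocks, base data_offset ${data_offset}"

# Kick off the workload configured on the bdevperf command line (-t 60 -w randrw ...).
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
io_pid=$!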
00:42:38.164 [2024-07-22 16:19:42.204776] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:42:38.164 [2024-07-22 16:19:42.212779] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:38.164 16:19:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:38.422 16:19:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:38.423 "name": "raid_bdev1", 00:42:38.423 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:38.423 "strip_size_kb": 0, 00:42:38.423 "state": "online", 00:42:38.423 "raid_level": "raid1", 00:42:38.423 "superblock": true, 00:42:38.423 "num_base_bdevs": 4, 00:42:38.423 "num_base_bdevs_discovered": 3, 00:42:38.423 "num_base_bdevs_operational": 3, 00:42:38.423 "base_bdevs_list": [ 00:42:38.423 { 00:42:38.423 "name": null, 00:42:38.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:38.423 "is_configured": false, 00:42:38.423 "data_offset": 2048, 00:42:38.423 "data_size": 63488 00:42:38.423 }, 00:42:38.423 { 00:42:38.423 "name": "BaseBdev2", 00:42:38.423 "uuid": "7769d198-ef46-5f7c-949a-fc406ce2e255", 00:42:38.423 "is_configured": true, 00:42:38.423 "data_offset": 2048, 00:42:38.423 "data_size": 63488 00:42:38.423 }, 00:42:38.423 { 00:42:38.423 "name": "BaseBdev3", 00:42:38.423 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:38.423 "is_configured": true, 00:42:38.423 "data_offset": 2048, 00:42:38.423 "data_size": 63488 00:42:38.423 }, 00:42:38.423 { 00:42:38.423 "name": "BaseBdev4", 00:42:38.423 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:38.423 "is_configured": true, 00:42:38.423 "data_offset": 2048, 00:42:38.423 "data_size": 63488 00:42:38.423 } 00:42:38.423 ] 00:42:38.423 }' 00:42:38.423 16:19:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:38.423 16:19:42 -- common/autotest_common.sh@10 -- # set +x 00:42:38.682 16:19:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:38.940 [2024-07-22 16:19:43.135124] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:42:38.940 [2024-07-22 16:19:43.135206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:38.940 16:19:43 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:42:39.198 [2024-07-22 16:19:43.215638] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:42:39.198 [2024-07-22 16:19:43.218511] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:39.198 
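(Editor's note, not part of the captured output.) The failure scenario itself is driven entirely over RPC: one member is torn out of the running array, the "spare" passthru bdev is attached in its place, and the test then watches the array's process descriptor flip to a rebuild targeting the spare. A hedged sketch of that sequence, with the bdev names used in this run:

# Sketch: remove a base bdev, attach the spare, and wait for the rebuild to start.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_raid_remove_base_bdev BaseBdev1          # degrade the array (3 of 4 members left)
$rpc bdev_raid_add_base_bdev raid_bdev1 spare      # hot-add the spare

# Poll until a rebuild process is reported on raid_bdev1.
while true; do
    ptype=$($rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
    [ "$ptype" = "rebuild" ] && break
    sleep 1
done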
[2024-07-22 16:19:43.330686] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:39.198 [2024-07-22 16:19:43.332605] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:39.456 [2024-07-22 16:19:43.565602] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:39.714 [2024-07-22 16:19:43.908800] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:42:39.973 16:19:44 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:39.973 16:19:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:39.973 16:19:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:39.973 16:19:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:39.973 16:19:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:39.973 16:19:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:39.973 16:19:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:40.232 16:19:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:40.232 "name": "raid_bdev1", 00:42:40.232 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:40.232 "strip_size_kb": 0, 00:42:40.232 "state": "online", 00:42:40.232 "raid_level": "raid1", 00:42:40.232 "superblock": true, 00:42:40.232 "num_base_bdevs": 4, 00:42:40.232 "num_base_bdevs_discovered": 4, 00:42:40.232 "num_base_bdevs_operational": 4, 00:42:40.232 "process": { 00:42:40.232 "type": "rebuild", 00:42:40.232 "target": "spare", 00:42:40.232 "progress": { 00:42:40.232 "blocks": 16384, 00:42:40.232 "percent": 25 00:42:40.232 } 00:42:40.232 }, 00:42:40.232 "base_bdevs_list": [ 00:42:40.232 { 00:42:40.232 "name": "spare", 00:42:40.232 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:40.232 "is_configured": true, 00:42:40.232 "data_offset": 2048, 00:42:40.232 "data_size": 63488 00:42:40.232 }, 00:42:40.232 { 00:42:40.232 "name": "BaseBdev2", 00:42:40.232 "uuid": "7769d198-ef46-5f7c-949a-fc406ce2e255", 00:42:40.232 "is_configured": true, 00:42:40.232 "data_offset": 2048, 00:42:40.232 "data_size": 63488 00:42:40.232 }, 00:42:40.232 { 00:42:40.232 "name": "BaseBdev3", 00:42:40.232 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:40.232 "is_configured": true, 00:42:40.232 "data_offset": 2048, 00:42:40.232 "data_size": 63488 00:42:40.232 }, 00:42:40.232 { 00:42:40.232 "name": "BaseBdev4", 00:42:40.232 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:40.232 "is_configured": true, 00:42:40.232 "data_offset": 2048, 00:42:40.232 "data_size": 63488 00:42:40.232 } 00:42:40.232 ] 00:42:40.232 }' 00:42:40.232 16:19:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:40.232 16:19:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:40.232 16:19:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:40.232 16:19:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:40.232 16:19:44 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:42:40.491 [2024-07-22 16:19:44.575184] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:42:40.491 [2024-07-22 
16:19:44.642105] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:42:40.749 [2024-07-22 16:19:44.822446] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:42:40.749 [2024-07-22 16:19:44.836987] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:40.749 [2024-07-22 16:19:44.873385] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:40.749 16:19:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:40.750 16:19:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:40.750 16:19:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:40.750 16:19:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:41.008 16:19:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:41.008 "name": "raid_bdev1", 00:42:41.008 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:41.008 "strip_size_kb": 0, 00:42:41.008 "state": "online", 00:42:41.008 "raid_level": "raid1", 00:42:41.008 "superblock": true, 00:42:41.008 "num_base_bdevs": 4, 00:42:41.008 "num_base_bdevs_discovered": 3, 00:42:41.008 "num_base_bdevs_operational": 3, 00:42:41.008 "base_bdevs_list": [ 00:42:41.008 { 00:42:41.008 "name": null, 00:42:41.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:41.008 "is_configured": false, 00:42:41.008 "data_offset": 2048, 00:42:41.008 "data_size": 63488 00:42:41.008 }, 00:42:41.008 { 00:42:41.008 "name": "BaseBdev2", 00:42:41.008 "uuid": "7769d198-ef46-5f7c-949a-fc406ce2e255", 00:42:41.008 "is_configured": true, 00:42:41.008 "data_offset": 2048, 00:42:41.008 "data_size": 63488 00:42:41.008 }, 00:42:41.008 { 00:42:41.008 "name": "BaseBdev3", 00:42:41.008 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:41.008 "is_configured": true, 00:42:41.008 "data_offset": 2048, 00:42:41.008 "data_size": 63488 00:42:41.008 }, 00:42:41.008 { 00:42:41.008 "name": "BaseBdev4", 00:42:41.008 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:41.008 "is_configured": true, 00:42:41.008 "data_offset": 2048, 00:42:41.008 "data_size": 63488 00:42:41.008 } 00:42:41.008 ] 00:42:41.008 }' 00:42:41.008 16:19:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:41.008 16:19:45 -- common/autotest_common.sh@10 -- # set +x 00:42:41.267 16:19:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:41.267 16:19:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:41.267 16:19:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:41.267 16:19:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:41.267 16:19:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:41.267 16:19:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:41.267 16:19:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:41.525 16:19:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:41.525 "name": "raid_bdev1", 00:42:41.525 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:41.525 "strip_size_kb": 0, 00:42:41.525 "state": "online", 00:42:41.525 "raid_level": "raid1", 00:42:41.525 "superblock": true, 00:42:41.525 "num_base_bdevs": 4, 00:42:41.525 "num_base_bdevs_discovered": 3, 00:42:41.525 "num_base_bdevs_operational": 3, 00:42:41.525 "base_bdevs_list": [ 00:42:41.525 { 00:42:41.525 "name": null, 00:42:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:41.525 "is_configured": false, 00:42:41.525 "data_offset": 2048, 00:42:41.525 "data_size": 63488 00:42:41.525 }, 00:42:41.525 { 00:42:41.525 "name": "BaseBdev2", 00:42:41.525 "uuid": "7769d198-ef46-5f7c-949a-fc406ce2e255", 00:42:41.525 "is_configured": true, 00:42:41.525 "data_offset": 2048, 00:42:41.525 "data_size": 63488 00:42:41.525 }, 00:42:41.525 { 00:42:41.525 "name": "BaseBdev3", 00:42:41.525 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:41.525 "is_configured": true, 00:42:41.525 "data_offset": 2048, 00:42:41.525 "data_size": 63488 00:42:41.525 }, 00:42:41.525 { 00:42:41.525 "name": "BaseBdev4", 00:42:41.525 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:41.525 "is_configured": true, 00:42:41.525 "data_offset": 2048, 00:42:41.525 "data_size": 63488 00:42:41.525 } 00:42:41.525 ] 00:42:41.525 }' 00:42:41.525 16:19:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:41.525 16:19:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:41.525 16:19:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:41.525 16:19:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:41.525 16:19:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:42:41.784 [2024-07-22 16:19:45.983967] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:42:41.784 [2024-07-22 16:19:45.984478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:41.784 16:19:46 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:42:41.784 [2024-07-22 16:19:46.053979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:42:41.784 [2024-07-22 16:19:46.056764] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:42:42.042 [2024-07-22 16:19:46.175157] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:42:42.301 [2024-07-22 16:19:46.399783] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:42.301 [2024-07-22 16:19:46.401069] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:42:42.867 [2024-07-22 16:19:46.976414] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:42:42.867 16:19:47 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:42.867 16:19:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:42.867 16:19:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:42.867 16:19:47 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:42:42.867 16:19:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:42.867 16:19:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:42.867 16:19:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:43.125 [2024-07-22 16:19:47.285604] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:42:43.125 16:19:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:43.125 "name": "raid_bdev1", 00:42:43.125 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:43.125 "strip_size_kb": 0, 00:42:43.125 "state": "online", 00:42:43.125 "raid_level": "raid1", 00:42:43.125 "superblock": true, 00:42:43.125 "num_base_bdevs": 4, 00:42:43.125 "num_base_bdevs_discovered": 4, 00:42:43.125 "num_base_bdevs_operational": 4, 00:42:43.125 "process": { 00:42:43.125 "type": "rebuild", 00:42:43.125 "target": "spare", 00:42:43.125 "progress": { 00:42:43.125 "blocks": 12288, 00:42:43.125 "percent": 19 00:42:43.125 } 00:42:43.125 }, 00:42:43.125 "base_bdevs_list": [ 00:42:43.125 { 00:42:43.125 "name": "spare", 00:42:43.125 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:43.125 "is_configured": true, 00:42:43.125 "data_offset": 2048, 00:42:43.125 "data_size": 63488 00:42:43.125 }, 00:42:43.125 { 00:42:43.125 "name": "BaseBdev2", 00:42:43.125 "uuid": "7769d198-ef46-5f7c-949a-fc406ce2e255", 00:42:43.125 "is_configured": true, 00:42:43.125 "data_offset": 2048, 00:42:43.125 "data_size": 63488 00:42:43.125 }, 00:42:43.125 { 00:42:43.125 "name": "BaseBdev3", 00:42:43.125 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:43.125 "is_configured": true, 00:42:43.125 "data_offset": 2048, 00:42:43.125 "data_size": 63488 00:42:43.125 }, 00:42:43.125 { 00:42:43.125 "name": "BaseBdev4", 00:42:43.125 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:43.125 "is_configured": true, 00:42:43.125 "data_offset": 2048, 00:42:43.125 "data_size": 63488 00:42:43.125 } 00:42:43.125 ] 00:42:43.125 }' 00:42:43.125 16:19:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:43.125 16:19:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:43.125 16:19:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:43.125 16:19:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:43.126 16:19:47 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:42:43.126 16:19:47 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:42:43.126 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:42:43.126 16:19:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:42:43.126 16:19:47 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:42:43.126 16:19:47 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:42:43.126 16:19:47 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:42:43.384 [2024-07-22 16:19:47.500659] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:42:43.384 [2024-07-22 16:19:47.501071] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:42:43.384 [2024-07-22 16:19:47.527769] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:42:43.642 [2024-07-22 16:19:47.840840] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:42:43.642 [2024-07-22 16:19:47.840974] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:43.901 16:19:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:43.901 [2024-07-22 16:19:48.130472] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:42:43.901 [2024-07-22 16:19:48.131402] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:44.159 "name": "raid_bdev1", 00:42:44.159 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:44.159 "strip_size_kb": 0, 00:42:44.159 "state": "online", 00:42:44.159 "raid_level": "raid1", 00:42:44.159 "superblock": true, 00:42:44.159 "num_base_bdevs": 4, 00:42:44.159 "num_base_bdevs_discovered": 3, 00:42:44.159 "num_base_bdevs_operational": 3, 00:42:44.159 "process": { 00:42:44.159 "type": "rebuild", 00:42:44.159 "target": "spare", 00:42:44.159 "progress": { 00:42:44.159 "blocks": 22528, 00:42:44.159 "percent": 35 00:42:44.159 } 00:42:44.159 }, 00:42:44.159 "base_bdevs_list": [ 00:42:44.159 { 00:42:44.159 "name": "spare", 00:42:44.159 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:44.159 "is_configured": true, 00:42:44.159 "data_offset": 2048, 00:42:44.159 "data_size": 63488 00:42:44.159 }, 00:42:44.159 { 00:42:44.159 "name": null, 00:42:44.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:44.159 "is_configured": false, 00:42:44.159 "data_offset": 2048, 00:42:44.159 "data_size": 63488 00:42:44.159 }, 00:42:44.159 { 00:42:44.159 "name": "BaseBdev3", 00:42:44.159 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:44.159 "is_configured": true, 00:42:44.159 "data_offset": 2048, 00:42:44.159 "data_size": 63488 00:42:44.159 }, 00:42:44.159 { 00:42:44.159 "name": "BaseBdev4", 00:42:44.159 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:44.159 "is_configured": true, 00:42:44.159 "data_offset": 2048, 00:42:44.159 "data_size": 63488 00:42:44.159 } 00:42:44.159 ] 00:42:44.159 }' 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@657 -- # local timeout=553 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:44.159 
16:19:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:44.159 16:19:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:44.418 16:19:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:44.418 "name": "raid_bdev1", 00:42:44.418 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:44.418 "strip_size_kb": 0, 00:42:44.418 "state": "online", 00:42:44.418 "raid_level": "raid1", 00:42:44.418 "superblock": true, 00:42:44.418 "num_base_bdevs": 4, 00:42:44.418 "num_base_bdevs_discovered": 3, 00:42:44.418 "num_base_bdevs_operational": 3, 00:42:44.418 "process": { 00:42:44.418 "type": "rebuild", 00:42:44.418 "target": "spare", 00:42:44.418 "progress": { 00:42:44.418 "blocks": 26624, 00:42:44.418 "percent": 41 00:42:44.418 } 00:42:44.418 }, 00:42:44.418 "base_bdevs_list": [ 00:42:44.418 { 00:42:44.418 "name": "spare", 00:42:44.418 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:44.418 "is_configured": true, 00:42:44.418 "data_offset": 2048, 00:42:44.418 "data_size": 63488 00:42:44.418 }, 00:42:44.418 { 00:42:44.418 "name": null, 00:42:44.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:44.418 "is_configured": false, 00:42:44.418 "data_offset": 2048, 00:42:44.418 "data_size": 63488 00:42:44.418 }, 00:42:44.418 { 00:42:44.418 "name": "BaseBdev3", 00:42:44.418 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:44.418 "is_configured": true, 00:42:44.418 "data_offset": 2048, 00:42:44.418 "data_size": 63488 00:42:44.418 }, 00:42:44.418 { 00:42:44.418 "name": "BaseBdev4", 00:42:44.418 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:44.418 "is_configured": true, 00:42:44.418 "data_offset": 2048, 00:42:44.418 "data_size": 63488 00:42:44.418 } 00:42:44.418 ] 00:42:44.418 }' 00:42:44.418 16:19:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:44.418 16:19:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:44.418 16:19:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:44.418 16:19:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:44.418 16:19:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:44.677 [2024-07-22 16:19:48.810312] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:42:45.617 16:19:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:45.618 [2024-07-22 16:19:49.804503] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 
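(Editor's note, not part of the captured output.) Rebuild completion is not awaited with a blocking RPC; the script simply re-reads the process block once a second under an overall wall-clock deadline (timeout=553 s in this run, derived from the remaining bdevperf runtime). Something along these lines, simplified from the loop traced above:

# Sketch: poll rebuild progress with a wall-clock deadline, as the trace does.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=553          # seconds; value taken from this run

while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    ptype=$(jq -r '.process.type // "none"' <<<"$info")
    if [ "$ptype" != "rebuild" ]; then
        break                                  # rebuild finished (or never started)
    fi
    echo "rebuild at $(jq -r '.process.progress.percent' <<<"$info")%"
    sleep 1
done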
offset_begin: 49152 offset_end: 55296 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:45.618 "name": "raid_bdev1", 00:42:45.618 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:45.618 "strip_size_kb": 0, 00:42:45.618 "state": "online", 00:42:45.618 "raid_level": "raid1", 00:42:45.618 "superblock": true, 00:42:45.618 "num_base_bdevs": 4, 00:42:45.618 "num_base_bdevs_discovered": 3, 00:42:45.618 "num_base_bdevs_operational": 3, 00:42:45.618 "process": { 00:42:45.618 "type": "rebuild", 00:42:45.618 "target": "spare", 00:42:45.618 "progress": { 00:42:45.618 "blocks": 51200, 00:42:45.618 "percent": 80 00:42:45.618 } 00:42:45.618 }, 00:42:45.618 "base_bdevs_list": [ 00:42:45.618 { 00:42:45.618 "name": "spare", 00:42:45.618 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:45.618 "is_configured": true, 00:42:45.618 "data_offset": 2048, 00:42:45.618 "data_size": 63488 00:42:45.618 }, 00:42:45.618 { 00:42:45.618 "name": null, 00:42:45.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:45.618 "is_configured": false, 00:42:45.618 "data_offset": 2048, 00:42:45.618 "data_size": 63488 00:42:45.618 }, 00:42:45.618 { 00:42:45.618 "name": "BaseBdev3", 00:42:45.618 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:45.618 "is_configured": true, 00:42:45.618 "data_offset": 2048, 00:42:45.618 "data_size": 63488 00:42:45.618 }, 00:42:45.618 { 00:42:45.618 "name": "BaseBdev4", 00:42:45.618 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:45.618 "is_configured": true, 00:42:45.618 "data_offset": 2048, 00:42:45.618 "data_size": 63488 00:42:45.618 } 00:42:45.618 ] 00:42:45.618 }' 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:42:45.618 16:19:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:42:46.187 [2024-07-22 16:19:50.152144] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:42:46.446 [2024-07-22 16:19:50.595915] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:42:46.446 [2024-07-22 16:19:50.694689] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:42:46.446 [2024-07-22 16:19:50.707634] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:46.704 16:19:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:46.962 16:19:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:46.962 "name": "raid_bdev1", 00:42:46.962 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:46.962 "strip_size_kb": 0, 00:42:46.962 "state": "online", 00:42:46.962 
"raid_level": "raid1", 00:42:46.962 "superblock": true, 00:42:46.962 "num_base_bdevs": 4, 00:42:46.962 "num_base_bdevs_discovered": 3, 00:42:46.962 "num_base_bdevs_operational": 3, 00:42:46.962 "base_bdevs_list": [ 00:42:46.962 { 00:42:46.962 "name": "spare", 00:42:46.962 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:46.962 "is_configured": true, 00:42:46.962 "data_offset": 2048, 00:42:46.962 "data_size": 63488 00:42:46.962 }, 00:42:46.962 { 00:42:46.962 "name": null, 00:42:46.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:46.962 "is_configured": false, 00:42:46.962 "data_offset": 2048, 00:42:46.962 "data_size": 63488 00:42:46.962 }, 00:42:46.962 { 00:42:46.962 "name": "BaseBdev3", 00:42:46.962 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:46.962 "is_configured": true, 00:42:46.962 "data_offset": 2048, 00:42:46.962 "data_size": 63488 00:42:46.962 }, 00:42:46.962 { 00:42:46.962 "name": "BaseBdev4", 00:42:46.962 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:46.962 "is_configured": true, 00:42:46.962 "data_offset": 2048, 00:42:46.962 "data_size": 63488 00:42:46.962 } 00:42:46.962 ] 00:42:46.962 }' 00:42:46.962 16:19:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:46.962 16:19:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:42:46.962 16:19:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@660 -- # break 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:46.963 16:19:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:47.221 "name": "raid_bdev1", 00:42:47.221 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:47.221 "strip_size_kb": 0, 00:42:47.221 "state": "online", 00:42:47.221 "raid_level": "raid1", 00:42:47.221 "superblock": true, 00:42:47.221 "num_base_bdevs": 4, 00:42:47.221 "num_base_bdevs_discovered": 3, 00:42:47.221 "num_base_bdevs_operational": 3, 00:42:47.221 "base_bdevs_list": [ 00:42:47.221 { 00:42:47.221 "name": "spare", 00:42:47.221 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:47.221 "is_configured": true, 00:42:47.221 "data_offset": 2048, 00:42:47.221 "data_size": 63488 00:42:47.221 }, 00:42:47.221 { 00:42:47.221 "name": null, 00:42:47.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:47.221 "is_configured": false, 00:42:47.221 "data_offset": 2048, 00:42:47.221 "data_size": 63488 00:42:47.221 }, 00:42:47.221 { 00:42:47.221 "name": "BaseBdev3", 00:42:47.221 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:47.221 "is_configured": true, 00:42:47.221 "data_offset": 2048, 00:42:47.221 "data_size": 63488 00:42:47.221 }, 00:42:47.221 { 00:42:47.221 "name": "BaseBdev4", 00:42:47.221 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:47.221 "is_configured": true, 00:42:47.221 "data_offset": 2048, 00:42:47.221 "data_size": 63488 00:42:47.221 } 00:42:47.221 ] 
00:42:47.221 }' 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:47.221 16:19:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:47.788 16:19:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:47.788 "name": "raid_bdev1", 00:42:47.788 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:47.788 "strip_size_kb": 0, 00:42:47.788 "state": "online", 00:42:47.788 "raid_level": "raid1", 00:42:47.788 "superblock": true, 00:42:47.788 "num_base_bdevs": 4, 00:42:47.788 "num_base_bdevs_discovered": 3, 00:42:47.788 "num_base_bdevs_operational": 3, 00:42:47.788 "base_bdevs_list": [ 00:42:47.788 { 00:42:47.788 "name": "spare", 00:42:47.788 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:47.788 "is_configured": true, 00:42:47.788 "data_offset": 2048, 00:42:47.788 "data_size": 63488 00:42:47.788 }, 00:42:47.788 { 00:42:47.788 "name": null, 00:42:47.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:47.788 "is_configured": false, 00:42:47.788 "data_offset": 2048, 00:42:47.788 "data_size": 63488 00:42:47.788 }, 00:42:47.788 { 00:42:47.788 "name": "BaseBdev3", 00:42:47.789 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:47.789 "is_configured": true, 00:42:47.789 "data_offset": 2048, 00:42:47.789 "data_size": 63488 00:42:47.789 }, 00:42:47.789 { 00:42:47.789 "name": "BaseBdev4", 00:42:47.789 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:47.789 "is_configured": true, 00:42:47.789 "data_offset": 2048, 00:42:47.789 "data_size": 63488 00:42:47.789 } 00:42:47.789 ] 00:42:47.789 }' 00:42:47.789 16:19:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:47.789 16:19:51 -- common/autotest_common.sh@10 -- # set +x 00:42:48.047 16:19:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:42:48.306 [2024-07-22 16:19:52.365782] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:48.306 [2024-07-22 16:19:52.365837] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:42:48.306 00:42:48.307 Latency(us) 00:42:48.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:48.307 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:42:48.307 raid_bdev1 : 
10.37 89.18 267.54 0.00 0.00 16145.55 269.96 128688.87 00:42:48.307 =================================================================================================================== 00:42:48.307 Total : 89.18 267.54 0.00 0.00 16145.55 269.96 128688.87 00:42:48.307 0 00:42:48.307 [2024-07-22 16:19:52.416199] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:48.307 [2024-07-22 16:19:52.416278] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:48.307 [2024-07-22 16:19:52.416428] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:48.307 [2024-07-22 16:19:52.416453] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:42:48.307 16:19:52 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:48.307 16:19:52 -- bdev/bdev_raid.sh@671 -- # jq length 00:42:48.564 16:19:52 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:42:48.564 16:19:52 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:42:48.564 16:19:52 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@12 -- # local i 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:48.564 16:19:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:42:48.823 /dev/nbd0 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:48.823 16:19:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:42:48.823 16:19:52 -- common/autotest_common.sh@857 -- # local i 00:42:48.823 16:19:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:48.823 16:19:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:48.823 16:19:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:42:48.823 16:19:52 -- common/autotest_common.sh@861 -- # break 00:42:48.823 16:19:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:48.823 16:19:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:42:48.823 16:19:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:48.823 1+0 records in 00:42:48.823 1+0 records out 00:42:48.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273823 s, 15.0 MB/s 00:42:48.823 16:19:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:48.823 16:19:52 -- common/autotest_common.sh@874 -- # size=4096 00:42:48.823 16:19:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:48.823 16:19:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:48.823 16:19:52 -- common/autotest_common.sh@877 -- # return 0 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:48.823 16:19:52 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:48.823 16:19:52 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:42:48.823 16:19:52 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:42:48.823 16:19:52 -- bdev/bdev_raid.sh@678 -- # continue 00:42:48.823 16:19:52 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:42:48.823 16:19:52 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:42:48.823 16:19:52 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@12 -- # local i 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:48.823 16:19:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:42:49.081 /dev/nbd1 00:42:49.081 16:19:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:49.081 16:19:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:49.081 16:19:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:42:49.081 16:19:53 -- common/autotest_common.sh@857 -- # local i 00:42:49.081 16:19:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:49.081 16:19:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:49.081 16:19:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:42:49.081 16:19:53 -- common/autotest_common.sh@861 -- # break 00:42:49.081 16:19:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:49.081 16:19:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:42:49.081 16:19:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:49.081 1+0 records in 00:42:49.081 1+0 records out 00:42:49.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581658 s, 7.0 MB/s 00:42:49.081 16:19:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:49.081 16:19:53 -- common/autotest_common.sh@874 -- # size=4096 00:42:49.081 16:19:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:49.081 16:19:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:49.081 16:19:53 -- common/autotest_common.sh@877 -- # return 0 00:42:49.081 16:19:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:49.081 16:19:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:49.081 16:19:53 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:49.346 16:19:53 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:42:49.346 16:19:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:49.346 16:19:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:42:49.346 16:19:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:49.346 16:19:53 -- bdev/nbd_common.sh@51 -- # local i 00:42:49.346 16:19:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:49.346 16:19:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd1 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@41 -- # break 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@45 -- # return 0 00:42:49.608 16:19:53 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:42:49.608 16:19:53 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:42:49.608 16:19:53 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@12 -- # local i 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:49.608 16:19:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:42:49.867 /dev/nbd1 00:42:49.867 16:19:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:49.867 16:19:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:49.867 16:19:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:42:49.867 16:19:54 -- common/autotest_common.sh@857 -- # local i 00:42:49.867 16:19:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:42:49.867 16:19:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:42:49.867 16:19:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:42:49.868 16:19:54 -- common/autotest_common.sh@861 -- # break 00:42:49.868 16:19:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:42:49.868 16:19:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:42:49.868 16:19:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:49.868 1+0 records in 00:42:49.868 1+0 records out 00:42:49.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445413 s, 9.2 MB/s 00:42:49.868 16:19:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:49.868 16:19:54 -- common/autotest_common.sh@874 -- # size=4096 00:42:49.868 16:19:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:49.868 16:19:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:42:49.868 16:19:54 -- common/autotest_common.sh@877 -- # return 0 00:42:49.868 16:19:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:49.868 16:19:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:49.868 16:19:54 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:42:49.868 16:19:54 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:42:49.868 16:19:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:49.868 16:19:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:42:49.868 16:19:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:49.868 16:19:54 -- bdev/nbd_common.sh@51 -- # local i 00:42:49.868 16:19:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:49.868 16:19:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@41 -- # break 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@45 -- # return 0 00:42:50.164 16:19:54 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@51 -- # local i 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:50.164 16:19:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@41 -- # break 00:42:50.422 16:19:54 -- bdev/nbd_common.sh@45 -- # return 0 00:42:50.422 16:19:54 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:42:50.422 16:19:54 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:50.422 16:19:54 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:42:50.422 16:19:54 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:42:50.680 16:19:54 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:42:50.939 [2024-07-22 16:19:55.110819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:42:50.939 [2024-07-22 16:19:55.111200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:50.939 [2024-07-22 16:19:55.111250] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:42:50.939 [2024-07-22 16:19:55.111276] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:50.939 [2024-07-22 16:19:55.114553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:50.939 [2024-07-22 16:19:55.114779] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:42:50.939 [2024-07-22 16:19:55.114917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:42:50.939 [2024-07-22 16:19:55.115019] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:50.939 BaseBdev1 00:42:50.939 16:19:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:50.939 16:19:55 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:42:50.939 16:19:55 -- bdev/bdev_raid.sh@696 -- # continue 00:42:50.939 16:19:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:50.939 16:19:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:42:50.939 16:19:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:42:51.197 16:19:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:42:51.456 [2024-07-22 16:19:55.543231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:42:51.456 [2024-07-22 16:19:55.543353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:51.456 [2024-07-22 16:19:55.543395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:42:51.456 [2024-07-22 16:19:55.543414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:51.456 [2024-07-22 16:19:55.543975] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:51.456 [2024-07-22 16:19:55.544056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:42:51.456 [2024-07-22 16:19:55.544194] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:42:51.456 [2024-07-22 16:19:55.544226] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:42:51.456 [2024-07-22 16:19:55.544239] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:42:51.456 [2024-07-22 16:19:55.544270] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state configuring 00:42:51.456 [2024-07-22 16:19:55.544358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:42:51.456 BaseBdev3 00:42:51.456 16:19:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:42:51.456 16:19:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:42:51.456 16:19:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:42:51.714 16:19:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:42:51.973 [2024-07-22 16:19:56.043425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:42:51.973 [2024-07-22 16:19:56.043549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:51.973 [2024-07-22 16:19:56.043643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:42:51.973 [2024-07-22 16:19:56.043660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:51.973 [2024-07-22 16:19:56.044551] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:51.973 [2024-07-22 16:19:56.044702] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:42:51.973 [2024-07-22 16:19:56.044931] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:42:51.973 [2024-07-22 16:19:56.045099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:42:51.973 BaseBdev4 00:42:51.973 16:19:56 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:42:52.232 16:19:56 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:42:52.492 [2024-07-22 16:19:56.595792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:42:52.492 [2024-07-22 16:19:56.596447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:42:52.492 [2024-07-22 16:19:56.596520] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:42:52.492 [2024-07-22 16:19:56.596539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:42:52.492 [2024-07-22 16:19:56.597278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:42:52.492 [2024-07-22 16:19:56.597305] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:42:52.492 [2024-07-22 16:19:56.597454] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:42:52.492 [2024-07-22 16:19:56.597499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:42:52.492 spare 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:52.492 16:19:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:52.492 [2024-07-22 16:19:56.697699] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c380 00:42:52.492 [2024-07-22 16:19:56.698186] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:42:52.492 [2024-07-22 16:19:56.698472] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036870 00:42:52.492 [2024-07-22 16:19:56.699086] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c380 00:42:52.492 [2024-07-22 16:19:56.699115] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c380 00:42:52.492 [2024-07-22 16:19:56.699351] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:42:52.751 16:19:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:52.751 "name": "raid_bdev1", 00:42:52.751 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 
00:42:52.751 "strip_size_kb": 0, 00:42:52.751 "state": "online", 00:42:52.751 "raid_level": "raid1", 00:42:52.751 "superblock": true, 00:42:52.751 "num_base_bdevs": 4, 00:42:52.751 "num_base_bdevs_discovered": 3, 00:42:52.751 "num_base_bdevs_operational": 3, 00:42:52.751 "base_bdevs_list": [ 00:42:52.751 { 00:42:52.751 "name": "spare", 00:42:52.751 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:52.751 "is_configured": true, 00:42:52.751 "data_offset": 2048, 00:42:52.751 "data_size": 63488 00:42:52.751 }, 00:42:52.751 { 00:42:52.751 "name": null, 00:42:52.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:52.751 "is_configured": false, 00:42:52.751 "data_offset": 2048, 00:42:52.751 "data_size": 63488 00:42:52.751 }, 00:42:52.751 { 00:42:52.751 "name": "BaseBdev3", 00:42:52.751 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:52.751 "is_configured": true, 00:42:52.751 "data_offset": 2048, 00:42:52.751 "data_size": 63488 00:42:52.751 }, 00:42:52.751 { 00:42:52.751 "name": "BaseBdev4", 00:42:52.751 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:52.751 "is_configured": true, 00:42:52.751 "data_offset": 2048, 00:42:52.751 "data_size": 63488 00:42:52.751 } 00:42:52.751 ] 00:42:52.751 }' 00:42:52.751 16:19:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:52.751 16:19:56 -- common/autotest_common.sh@10 -- # set +x 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:42:53.319 "name": "raid_bdev1", 00:42:53.319 "uuid": "14fede08-a962-4e65-9946-7ebdeaf2088b", 00:42:53.319 "strip_size_kb": 0, 00:42:53.319 "state": "online", 00:42:53.319 "raid_level": "raid1", 00:42:53.319 "superblock": true, 00:42:53.319 "num_base_bdevs": 4, 00:42:53.319 "num_base_bdevs_discovered": 3, 00:42:53.319 "num_base_bdevs_operational": 3, 00:42:53.319 "base_bdevs_list": [ 00:42:53.319 { 00:42:53.319 "name": "spare", 00:42:53.319 "uuid": "063d6009-90f7-562a-9527-9d08b200b4a8", 00:42:53.319 "is_configured": true, 00:42:53.319 "data_offset": 2048, 00:42:53.319 "data_size": 63488 00:42:53.319 }, 00:42:53.319 { 00:42:53.319 "name": null, 00:42:53.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:53.319 "is_configured": false, 00:42:53.319 "data_offset": 2048, 00:42:53.319 "data_size": 63488 00:42:53.319 }, 00:42:53.319 { 00:42:53.319 "name": "BaseBdev3", 00:42:53.319 "uuid": "fea3c260-931a-51d4-938a-f8a1af95f4fb", 00:42:53.319 "is_configured": true, 00:42:53.319 "data_offset": 2048, 00:42:53.319 "data_size": 63488 00:42:53.319 }, 00:42:53.319 { 00:42:53.319 "name": "BaseBdev4", 00:42:53.319 "uuid": "329b09d0-9bd9-55c1-937b-166944732e5e", 00:42:53.319 "is_configured": true, 00:42:53.319 "data_offset": 2048, 00:42:53.319 "data_size": 63488 00:42:53.319 } 00:42:53.319 ] 00:42:53.319 }' 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:53.319 16:19:57 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:42:53.885 16:19:57 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:42:53.885 16:19:57 -- bdev/bdev_raid.sh@709 -- # killprocess 83211 00:42:53.885 16:19:57 -- common/autotest_common.sh@926 -- # '[' -z 83211 ']' 00:42:53.885 16:19:57 -- common/autotest_common.sh@930 -- # kill -0 83211 00:42:53.885 16:19:57 -- common/autotest_common.sh@931 -- # uname 00:42:53.885 16:19:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:42:53.885 16:19:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83211 00:42:53.885 16:19:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:42:53.885 killing process with pid 83211 00:42:53.885 Received shutdown signal, test time was about 15.876497 seconds 00:42:53.885 00:42:53.885 Latency(us) 00:42:53.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:53.885 =================================================================================================================== 00:42:53.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:53.885 16:19:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:42:53.885 16:19:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83211' 00:42:53.885 16:19:57 -- common/autotest_common.sh@945 -- # kill 83211 00:42:53.885 [2024-07-22 16:19:57.898011] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:42:53.885 16:19:57 -- common/autotest_common.sh@950 -- # wait 83211 00:42:53.885 [2024-07-22 16:19:57.898136] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:42:53.885 [2024-07-22 16:19:57.898251] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:42:53.885 [2024-07-22 16:19:57.898272] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c380 name raid_bdev1, state offline 00:42:54.144 [2024-07-22 16:19:58.309222] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@711 -- # return 0 00:42:55.520 00:42:55.520 real 0m22.798s 00:42:55.520 user 0m34.403s 00:42:55.520 sys 0m3.523s 00:42:55.520 16:19:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:55.520 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:42:55.520 ************************************ 00:42:55.520 END TEST raid_rebuild_test_sb_io 00:42:55.520 ************************************ 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:42:55.520 16:19:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:42:55.520 16:19:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:42:55.520 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:42:55.520 ************************************ 00:42:55.520 START TEST raid5f_state_function_test 00:42:55.520 ************************************ 00:42:55.520 16:19:59 -- common/autotest_common.sh@1104 -- # 
raid_state_function_test raid5f 3 false 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=83780 00:42:55.520 Process raid pid: 83780 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 83780' 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 83780 /var/tmp/spdk-raid.sock 00:42:55.520 16:19:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:42:55.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:42:55.520 16:19:59 -- common/autotest_common.sh@819 -- # '[' -z 83780 ']' 00:42:55.520 16:19:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:42:55.520 16:19:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:42:55.520 16:19:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:42:55.520 16:19:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:42:55.520 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:42:55.520 [2024-07-22 16:19:59.760737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:42:55.520 [2024-07-22 16:19:59.761227] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:55.779 [2024-07-22 16:19:59.939982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:56.038 [2024-07-22 16:20:00.196727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:42:56.296 [2024-07-22 16:20:00.415748] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:42:56.580 16:20:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:42:56.580 16:20:00 -- common/autotest_common.sh@852 -- # return 0 00:42:56.580 16:20:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:42:56.838 [2024-07-22 16:20:00.990413] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:56.838 [2024-07-22 16:20:00.990478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:56.838 [2024-07-22 16:20:00.990501] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:56.838 [2024-07-22 16:20:00.990526] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:56.838 [2024-07-22 16:20:00.990535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:42:56.838 [2024-07-22 16:20:00.990549] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:56.838 16:20:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:57.096 16:20:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:57.096 "name": "Existed_Raid", 00:42:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:57.096 "strip_size_kb": 64, 00:42:57.096 "state": "configuring", 00:42:57.096 "raid_level": "raid5f", 00:42:57.096 "superblock": false, 00:42:57.096 "num_base_bdevs": 3, 00:42:57.096 "num_base_bdevs_discovered": 0, 00:42:57.096 "num_base_bdevs_operational": 3, 00:42:57.096 "base_bdevs_list": [ 00:42:57.096 { 00:42:57.096 "name": "BaseBdev1", 00:42:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:57.096 "is_configured": false, 00:42:57.096 "data_offset": 0, 00:42:57.096 "data_size": 0 00:42:57.096 }, 00:42:57.096 { 00:42:57.096 "name": "BaseBdev2", 00:42:57.096 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:42:57.096 "is_configured": false, 00:42:57.096 "data_offset": 0, 00:42:57.096 "data_size": 0 00:42:57.096 }, 00:42:57.096 { 00:42:57.096 "name": "BaseBdev3", 00:42:57.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:57.096 "is_configured": false, 00:42:57.096 "data_offset": 0, 00:42:57.096 "data_size": 0 00:42:57.096 } 00:42:57.096 ] 00:42:57.096 }' 00:42:57.096 16:20:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:57.096 16:20:01 -- common/autotest_common.sh@10 -- # set +x 00:42:57.663 16:20:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:42:57.663 [2024-07-22 16:20:01.910659] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:57.663 [2024-07-22 16:20:01.910731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:42:57.663 16:20:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:42:58.230 [2024-07-22 16:20:02.206793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:42:58.230 [2024-07-22 16:20:02.206894] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:42:58.230 [2024-07-22 16:20:02.206908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:58.230 [2024-07-22 16:20:02.206937] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:58.230 [2024-07-22 16:20:02.206947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:42:58.230 [2024-07-22 16:20:02.206962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:42:58.230 16:20:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:42:58.230 [2024-07-22 16:20:02.488695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:58.230 BaseBdev1 00:42:58.489 16:20:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:42:58.489 16:20:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:42:58.489 16:20:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:42:58.489 16:20:02 -- common/autotest_common.sh@889 -- # local i 00:42:58.489 16:20:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:42:58.489 16:20:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:42:58.489 16:20:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:42:58.489 16:20:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:42:58.748 [ 00:42:58.748 { 00:42:58.748 "name": "BaseBdev1", 00:42:58.748 "aliases": [ 00:42:58.748 "400a66b1-8cc6-4af9-828a-b1717ee88b8e" 00:42:58.748 ], 00:42:58.748 "product_name": "Malloc disk", 00:42:58.748 "block_size": 512, 00:42:58.748 "num_blocks": 65536, 00:42:58.748 "uuid": "400a66b1-8cc6-4af9-828a-b1717ee88b8e", 00:42:58.748 "assigned_rate_limits": { 00:42:58.748 "rw_ios_per_sec": 0, 00:42:58.748 "rw_mbytes_per_sec": 0, 00:42:58.748 "r_mbytes_per_sec": 0, 00:42:58.748 "w_mbytes_per_sec": 
0 00:42:58.748 }, 00:42:58.748 "claimed": true, 00:42:58.748 "claim_type": "exclusive_write", 00:42:58.748 "zoned": false, 00:42:58.748 "supported_io_types": { 00:42:58.748 "read": true, 00:42:58.748 "write": true, 00:42:58.748 "unmap": true, 00:42:58.748 "write_zeroes": true, 00:42:58.748 "flush": true, 00:42:58.748 "reset": true, 00:42:58.748 "compare": false, 00:42:58.748 "compare_and_write": false, 00:42:58.748 "abort": true, 00:42:58.748 "nvme_admin": false, 00:42:58.748 "nvme_io": false 00:42:58.748 }, 00:42:58.748 "memory_domains": [ 00:42:58.748 { 00:42:58.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:58.748 "dma_device_type": 2 00:42:58.748 } 00:42:58.748 ], 00:42:58.748 "driver_specific": {} 00:42:58.748 } 00:42:58.748 ] 00:42:58.748 16:20:02 -- common/autotest_common.sh@895 -- # return 0 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:58.748 16:20:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:42:59.006 16:20:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:42:59.006 "name": "Existed_Raid", 00:42:59.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:59.006 "strip_size_kb": 64, 00:42:59.006 "state": "configuring", 00:42:59.006 "raid_level": "raid5f", 00:42:59.006 "superblock": false, 00:42:59.006 "num_base_bdevs": 3, 00:42:59.006 "num_base_bdevs_discovered": 1, 00:42:59.006 "num_base_bdevs_operational": 3, 00:42:59.006 "base_bdevs_list": [ 00:42:59.006 { 00:42:59.006 "name": "BaseBdev1", 00:42:59.006 "uuid": "400a66b1-8cc6-4af9-828a-b1717ee88b8e", 00:42:59.006 "is_configured": true, 00:42:59.006 "data_offset": 0, 00:42:59.006 "data_size": 65536 00:42:59.006 }, 00:42:59.006 { 00:42:59.006 "name": "BaseBdev2", 00:42:59.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:59.006 "is_configured": false, 00:42:59.006 "data_offset": 0, 00:42:59.006 "data_size": 0 00:42:59.007 }, 00:42:59.007 { 00:42:59.007 "name": "BaseBdev3", 00:42:59.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:42:59.007 "is_configured": false, 00:42:59.007 "data_offset": 0, 00:42:59.007 "data_size": 0 00:42:59.007 } 00:42:59.007 ] 00:42:59.007 }' 00:42:59.007 16:20:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:42:59.007 16:20:03 -- common/autotest_common.sh@10 -- # set +x 00:42:59.572 16:20:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:42:59.572 [2024-07-22 16:20:03.773186] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:42:59.572 [2024-07-22 16:20:03.773265] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:42:59.572 16:20:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:42:59.572 16:20:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:42:59.830 [2024-07-22 16:20:04.049367] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:42:59.830 [2024-07-22 16:20:04.051844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:42:59.830 [2024-07-22 16:20:04.051900] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:42:59.830 [2024-07-22 16:20:04.051915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:42:59.830 [2024-07-22 16:20:04.051931] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:42:59.830 16:20:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:00.089 16:20:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:00.089 "name": "Existed_Raid", 00:43:00.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:00.089 "strip_size_kb": 64, 00:43:00.089 "state": "configuring", 00:43:00.089 "raid_level": "raid5f", 00:43:00.089 "superblock": false, 00:43:00.089 "num_base_bdevs": 3, 00:43:00.089 "num_base_bdevs_discovered": 1, 00:43:00.089 "num_base_bdevs_operational": 3, 00:43:00.089 "base_bdevs_list": [ 00:43:00.089 { 00:43:00.089 "name": "BaseBdev1", 00:43:00.089 "uuid": "400a66b1-8cc6-4af9-828a-b1717ee88b8e", 00:43:00.089 "is_configured": true, 00:43:00.089 "data_offset": 0, 00:43:00.089 "data_size": 65536 00:43:00.089 }, 00:43:00.089 { 00:43:00.089 "name": "BaseBdev2", 00:43:00.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:00.089 "is_configured": false, 00:43:00.089 "data_offset": 0, 00:43:00.089 "data_size": 0 00:43:00.089 }, 00:43:00.089 { 00:43:00.089 "name": "BaseBdev3", 00:43:00.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:00.089 "is_configured": false, 00:43:00.089 "data_offset": 0, 00:43:00.089 "data_size": 0 00:43:00.089 } 00:43:00.089 ] 00:43:00.089 }' 00:43:00.089 16:20:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:00.089 16:20:04 -- common/autotest_common.sh@10 -- # set +x 00:43:00.348 16:20:04 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:43:00.915 [2024-07-22 16:20:04.930011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:00.915 BaseBdev2 00:43:00.915 16:20:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:43:00.915 16:20:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:43:00.915 16:20:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:43:00.915 16:20:04 -- common/autotest_common.sh@889 -- # local i 00:43:00.915 16:20:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:43:00.915 16:20:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:43:00.915 16:20:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:43:01.173 16:20:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:43:01.173 [ 00:43:01.173 { 00:43:01.173 "name": "BaseBdev2", 00:43:01.173 "aliases": [ 00:43:01.173 "94bd491c-c5dc-4482-a567-706af6d2d032" 00:43:01.173 ], 00:43:01.173 "product_name": "Malloc disk", 00:43:01.173 "block_size": 512, 00:43:01.173 "num_blocks": 65536, 00:43:01.173 "uuid": "94bd491c-c5dc-4482-a567-706af6d2d032", 00:43:01.173 "assigned_rate_limits": { 00:43:01.173 "rw_ios_per_sec": 0, 00:43:01.173 "rw_mbytes_per_sec": 0, 00:43:01.173 "r_mbytes_per_sec": 0, 00:43:01.173 "w_mbytes_per_sec": 0 00:43:01.173 }, 00:43:01.173 "claimed": true, 00:43:01.173 "claim_type": "exclusive_write", 00:43:01.173 "zoned": false, 00:43:01.173 "supported_io_types": { 00:43:01.173 "read": true, 00:43:01.173 "write": true, 00:43:01.173 "unmap": true, 00:43:01.173 "write_zeroes": true, 00:43:01.173 "flush": true, 00:43:01.173 "reset": true, 00:43:01.173 "compare": false, 00:43:01.173 "compare_and_write": false, 00:43:01.173 "abort": true, 00:43:01.173 "nvme_admin": false, 00:43:01.173 "nvme_io": false 00:43:01.173 }, 00:43:01.173 "memory_domains": [ 00:43:01.173 { 00:43:01.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:01.173 "dma_device_type": 2 00:43:01.173 } 00:43:01.173 ], 00:43:01.173 "driver_specific": {} 00:43:01.173 } 00:43:01.173 ] 00:43:01.431 16:20:05 -- common/autotest_common.sh@895 -- # return 0 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:01.431 16:20:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:43:01.689 16:20:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:01.689 "name": "Existed_Raid", 00:43:01.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:01.689 "strip_size_kb": 64, 00:43:01.689 "state": "configuring", 00:43:01.689 "raid_level": "raid5f", 00:43:01.689 "superblock": false, 00:43:01.689 "num_base_bdevs": 3, 00:43:01.689 "num_base_bdevs_discovered": 2, 00:43:01.689 "num_base_bdevs_operational": 3, 00:43:01.689 "base_bdevs_list": [ 00:43:01.689 { 00:43:01.689 "name": "BaseBdev1", 00:43:01.689 "uuid": "400a66b1-8cc6-4af9-828a-b1717ee88b8e", 00:43:01.689 "is_configured": true, 00:43:01.689 "data_offset": 0, 00:43:01.689 "data_size": 65536 00:43:01.689 }, 00:43:01.689 { 00:43:01.689 "name": "BaseBdev2", 00:43:01.689 "uuid": "94bd491c-c5dc-4482-a567-706af6d2d032", 00:43:01.689 "is_configured": true, 00:43:01.689 "data_offset": 0, 00:43:01.689 "data_size": 65536 00:43:01.689 }, 00:43:01.689 { 00:43:01.689 "name": "BaseBdev3", 00:43:01.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:01.689 "is_configured": false, 00:43:01.689 "data_offset": 0, 00:43:01.689 "data_size": 0 00:43:01.689 } 00:43:01.689 ] 00:43:01.689 }' 00:43:01.689 16:20:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:01.689 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:43:01.947 16:20:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:43:02.206 [2024-07-22 16:20:06.377510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:02.206 [2024-07-22 16:20:06.377597] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:43:02.206 [2024-07-22 16:20:06.377614] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:43:02.206 [2024-07-22 16:20:06.377739] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:43:02.206 [2024-07-22 16:20:06.383435] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:43:02.206 [2024-07-22 16:20:06.383463] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:43:02.206 [2024-07-22 16:20:06.383836] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:02.206 BaseBdev3 00:43:02.206 16:20:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:43:02.206 16:20:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:43:02.206 16:20:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:43:02.206 16:20:06 -- common/autotest_common.sh@889 -- # local i 00:43:02.206 16:20:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:43:02.206 16:20:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:43:02.206 16:20:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:43:02.464 16:20:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:43:02.723 [ 00:43:02.723 { 00:43:02.723 "name": "BaseBdev3", 00:43:02.723 "aliases": [ 00:43:02.723 "2ebef202-41c9-4bc2-82ae-ebaa80b49783" 00:43:02.723 ], 00:43:02.723 "product_name": "Malloc disk", 00:43:02.723 "block_size": 512, 00:43:02.723 "num_blocks": 65536, 00:43:02.723 "uuid": "2ebef202-41c9-4bc2-82ae-ebaa80b49783", 00:43:02.723 "assigned_rate_limits": { 00:43:02.723 
"rw_ios_per_sec": 0, 00:43:02.723 "rw_mbytes_per_sec": 0, 00:43:02.723 "r_mbytes_per_sec": 0, 00:43:02.723 "w_mbytes_per_sec": 0 00:43:02.723 }, 00:43:02.723 "claimed": true, 00:43:02.723 "claim_type": "exclusive_write", 00:43:02.723 "zoned": false, 00:43:02.723 "supported_io_types": { 00:43:02.723 "read": true, 00:43:02.723 "write": true, 00:43:02.723 "unmap": true, 00:43:02.723 "write_zeroes": true, 00:43:02.723 "flush": true, 00:43:02.723 "reset": true, 00:43:02.723 "compare": false, 00:43:02.723 "compare_and_write": false, 00:43:02.723 "abort": true, 00:43:02.723 "nvme_admin": false, 00:43:02.723 "nvme_io": false 00:43:02.723 }, 00:43:02.723 "memory_domains": [ 00:43:02.723 { 00:43:02.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:02.723 "dma_device_type": 2 00:43:02.723 } 00:43:02.723 ], 00:43:02.723 "driver_specific": {} 00:43:02.723 } 00:43:02.723 ] 00:43:02.723 16:20:06 -- common/autotest_common.sh@895 -- # return 0 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:02.723 16:20:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:02.981 16:20:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:02.981 "name": "Existed_Raid", 00:43:02.981 "uuid": "459f2f47-f88b-4267-a586-71af81b02550", 00:43:02.981 "strip_size_kb": 64, 00:43:02.981 "state": "online", 00:43:02.981 "raid_level": "raid5f", 00:43:02.981 "superblock": false, 00:43:02.981 "num_base_bdevs": 3, 00:43:02.981 "num_base_bdevs_discovered": 3, 00:43:02.981 "num_base_bdevs_operational": 3, 00:43:02.981 "base_bdevs_list": [ 00:43:02.981 { 00:43:02.981 "name": "BaseBdev1", 00:43:02.981 "uuid": "400a66b1-8cc6-4af9-828a-b1717ee88b8e", 00:43:02.981 "is_configured": true, 00:43:02.981 "data_offset": 0, 00:43:02.981 "data_size": 65536 00:43:02.981 }, 00:43:02.981 { 00:43:02.981 "name": "BaseBdev2", 00:43:02.981 "uuid": "94bd491c-c5dc-4482-a567-706af6d2d032", 00:43:02.981 "is_configured": true, 00:43:02.981 "data_offset": 0, 00:43:02.981 "data_size": 65536 00:43:02.981 }, 00:43:02.981 { 00:43:02.981 "name": "BaseBdev3", 00:43:02.981 "uuid": "2ebef202-41c9-4bc2-82ae-ebaa80b49783", 00:43:02.981 "is_configured": true, 00:43:02.981 "data_offset": 0, 00:43:02.981 "data_size": 65536 00:43:02.981 } 00:43:02.981 ] 00:43:02.981 }' 00:43:02.981 16:20:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:02.981 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:43:03.548 16:20:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:43:03.548 [2024-07-22 16:20:07.766380] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:03.807 16:20:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:04.064 16:20:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:04.064 "name": "Existed_Raid", 00:43:04.064 "uuid": "459f2f47-f88b-4267-a586-71af81b02550", 00:43:04.064 "strip_size_kb": 64, 00:43:04.064 "state": "online", 00:43:04.064 "raid_level": "raid5f", 00:43:04.064 "superblock": false, 00:43:04.064 "num_base_bdevs": 3, 00:43:04.064 "num_base_bdevs_discovered": 2, 00:43:04.064 "num_base_bdevs_operational": 2, 00:43:04.064 "base_bdevs_list": [ 00:43:04.064 { 00:43:04.064 "name": null, 00:43:04.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:04.064 "is_configured": false, 00:43:04.064 "data_offset": 0, 00:43:04.064 "data_size": 65536 00:43:04.064 }, 00:43:04.064 { 00:43:04.064 "name": "BaseBdev2", 00:43:04.064 "uuid": "94bd491c-c5dc-4482-a567-706af6d2d032", 00:43:04.064 "is_configured": true, 00:43:04.064 "data_offset": 0, 00:43:04.064 "data_size": 65536 00:43:04.064 }, 00:43:04.064 { 00:43:04.064 "name": "BaseBdev3", 00:43:04.064 "uuid": "2ebef202-41c9-4bc2-82ae-ebaa80b49783", 00:43:04.064 "is_configured": true, 00:43:04.064 "data_offset": 0, 00:43:04.064 "data_size": 65536 00:43:04.064 } 00:43:04.064 ] 00:43:04.064 }' 00:43:04.064 16:20:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:04.064 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:43:04.322 16:20:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:43:04.322 16:20:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:43:04.322 16:20:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:04.322 16:20:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:43:04.580 16:20:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:43:04.580 16:20:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:43:04.580 16:20:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:43:04.839 [2024-07-22 16:20:08.971864] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:43:04.839 [2024-07-22 16:20:08.971937] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:04.839 [2024-07-22 16:20:08.972000] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:04.839 16:20:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:43:04.839 16:20:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:43:04.839 16:20:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:43:04.839 16:20:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:05.405 16:20:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:43:05.405 16:20:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:43:05.405 16:20:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:43:05.405 [2024-07-22 16:20:09.597005] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:43:05.405 [2024-07-22 16:20:09.597117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:43:05.662 16:20:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:43:05.663 16:20:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:43:05.663 16:20:09 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:05.663 16:20:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:43:05.921 16:20:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:43:05.921 16:20:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:43:05.921 16:20:10 -- bdev/bdev_raid.sh@287 -- # killprocess 83780 00:43:05.921 16:20:10 -- common/autotest_common.sh@926 -- # '[' -z 83780 ']' 00:43:05.921 16:20:10 -- common/autotest_common.sh@930 -- # kill -0 83780 00:43:05.921 16:20:10 -- common/autotest_common.sh@931 -- # uname 00:43:05.921 16:20:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:05.921 16:20:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83780 00:43:05.921 killing process with pid 83780 00:43:05.921 16:20:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:43:05.921 16:20:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:43:05.921 16:20:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83780' 00:43:05.921 16:20:10 -- common/autotest_common.sh@945 -- # kill 83780 00:43:05.921 [2024-07-22 16:20:10.055036] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:05.921 16:20:10 -- common/autotest_common.sh@950 -- # wait 83780 00:43:05.921 [2024-07-22 16:20:10.055167] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:07.388 ************************************ 00:43:07.388 END TEST raid5f_state_function_test 00:43:07.388 ************************************ 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:43:07.388 00:43:07.388 real 0m11.684s 00:43:07.388 user 0m18.961s 00:43:07.388 sys 0m2.088s 00:43:07.388 16:20:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:07.388 16:20:11 -- common/autotest_common.sh@10 -- # set +x 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:43:07.388 16:20:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:43:07.388 16:20:11 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:43:07.388 16:20:11 -- common/autotest_common.sh@10 -- # set +x 00:43:07.388 ************************************ 00:43:07.388 START TEST raid5f_state_function_test_sb 00:43:07.388 ************************************ 00:43:07.388 16:20:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:43:07.388 Process raid pid: 84139 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=84139 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84139' 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84139 /var/tmp/spdk-raid.sock 00:43:07.388 16:20:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:43:07.388 16:20:11 -- common/autotest_common.sh@819 -- # '[' -z 84139 ']' 00:43:07.388 16:20:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:43:07.388 16:20:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:07.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:43:07.388 16:20:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:43:07.388 16:20:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:07.388 16:20:11 -- common/autotest_common.sh@10 -- # set +x 00:43:07.388 [2024-07-22 16:20:11.503906] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
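For reference, the launch pattern traced above — a dedicated bdev_svc app started with RAID debug logging on /var/tmp/spdk-raid.sock, with waitforlisten polling until that socket answers — can be reproduced by hand roughly as follows. This is only a sketch: it reuses the repo paths shown in the trace and substitutes a plain rpc_get_methods poll for the harness's waitforlisten helper.
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # Poll the private RPC socket until the app is ready to accept commands.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done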
00:43:07.388 [2024-07-22 16:20:11.504140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:07.646 [2024-07-22 16:20:11.682992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.905 [2024-07-22 16:20:11.958433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:07.905 [2024-07-22 16:20:12.176916] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:08.471 16:20:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:08.471 16:20:12 -- common/autotest_common.sh@852 -- # return 0 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:43:08.471 [2024-07-22 16:20:12.633135] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:08.471 [2024-07-22 16:20:12.633209] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:08.471 [2024-07-22 16:20:12.633226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:08.471 [2024-07-22 16:20:12.633242] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:08.471 [2024-07-22 16:20:12.633253] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:43:08.471 [2024-07-22 16:20:12.633268] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:08.471 16:20:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:08.729 16:20:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:08.729 "name": "Existed_Raid", 00:43:08.729 "uuid": "b2b9a517-9381-4dbd-b1a9-10bb446914b1", 00:43:08.729 "strip_size_kb": 64, 00:43:08.730 "state": "configuring", 00:43:08.730 "raid_level": "raid5f", 00:43:08.730 "superblock": true, 00:43:08.730 "num_base_bdevs": 3, 00:43:08.730 "num_base_bdevs_discovered": 0, 00:43:08.730 "num_base_bdevs_operational": 3, 00:43:08.730 "base_bdevs_list": [ 00:43:08.730 { 00:43:08.730 "name": "BaseBdev1", 00:43:08.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:08.730 "is_configured": false, 00:43:08.730 "data_offset": 0, 00:43:08.730 "data_size": 0 00:43:08.730 }, 00:43:08.730 { 00:43:08.730 "name": "BaseBdev2", 00:43:08.730 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:43:08.730 "is_configured": false, 00:43:08.730 "data_offset": 0, 00:43:08.730 "data_size": 0 00:43:08.730 }, 00:43:08.730 { 00:43:08.730 "name": "BaseBdev3", 00:43:08.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:08.730 "is_configured": false, 00:43:08.730 "data_offset": 0, 00:43:08.730 "data_size": 0 00:43:08.730 } 00:43:08.730 ] 00:43:08.730 }' 00:43:08.730 16:20:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:08.730 16:20:12 -- common/autotest_common.sh@10 -- # set +x 00:43:09.295 16:20:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:43:09.295 [2024-07-22 16:20:13.541307] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:09.295 [2024-07-22 16:20:13.541383] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:43:09.295 16:20:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:43:09.863 [2024-07-22 16:20:13.845404] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:43:09.863 [2024-07-22 16:20:13.845483] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:43:09.863 [2024-07-22 16:20:13.845499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:09.863 [2024-07-22 16:20:13.845519] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:09.863 [2024-07-22 16:20:13.845528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:43:09.863 [2024-07-22 16:20:13.845543] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:09.863 16:20:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:43:10.121 [2024-07-22 16:20:14.161852] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:10.121 BaseBdev1 00:43:10.121 16:20:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:43:10.121 16:20:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:43:10.121 16:20:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:43:10.121 16:20:14 -- common/autotest_common.sh@889 -- # local i 00:43:10.121 16:20:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:43:10.121 16:20:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:43:10.121 16:20:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:43:10.379 16:20:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:43:10.637 [ 00:43:10.637 { 00:43:10.637 "name": "BaseBdev1", 00:43:10.637 "aliases": [ 00:43:10.637 "5a1b1cef-bbd1-48f4-bd3a-98c609c188a7" 00:43:10.637 ], 00:43:10.637 "product_name": "Malloc disk", 00:43:10.637 "block_size": 512, 00:43:10.637 "num_blocks": 65536, 00:43:10.637 "uuid": "5a1b1cef-bbd1-48f4-bd3a-98c609c188a7", 00:43:10.637 "assigned_rate_limits": { 00:43:10.637 "rw_ios_per_sec": 0, 00:43:10.637 "rw_mbytes_per_sec": 0, 00:43:10.637 "r_mbytes_per_sec": 0, 00:43:10.637 
"w_mbytes_per_sec": 0 00:43:10.637 }, 00:43:10.637 "claimed": true, 00:43:10.637 "claim_type": "exclusive_write", 00:43:10.637 "zoned": false, 00:43:10.637 "supported_io_types": { 00:43:10.637 "read": true, 00:43:10.637 "write": true, 00:43:10.637 "unmap": true, 00:43:10.637 "write_zeroes": true, 00:43:10.637 "flush": true, 00:43:10.637 "reset": true, 00:43:10.637 "compare": false, 00:43:10.637 "compare_and_write": false, 00:43:10.637 "abort": true, 00:43:10.637 "nvme_admin": false, 00:43:10.637 "nvme_io": false 00:43:10.637 }, 00:43:10.637 "memory_domains": [ 00:43:10.637 { 00:43:10.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:10.637 "dma_device_type": 2 00:43:10.637 } 00:43:10.637 ], 00:43:10.637 "driver_specific": {} 00:43:10.637 } 00:43:10.637 ] 00:43:10.637 16:20:14 -- common/autotest_common.sh@895 -- # return 0 00:43:10.637 16:20:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:10.637 16:20:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:10.638 16:20:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:10.896 16:20:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:10.896 "name": "Existed_Raid", 00:43:10.896 "uuid": "f8fe24cb-7ec7-4def-82cc-c147eb657bfe", 00:43:10.896 "strip_size_kb": 64, 00:43:10.896 "state": "configuring", 00:43:10.896 "raid_level": "raid5f", 00:43:10.896 "superblock": true, 00:43:10.896 "num_base_bdevs": 3, 00:43:10.896 "num_base_bdevs_discovered": 1, 00:43:10.896 "num_base_bdevs_operational": 3, 00:43:10.896 "base_bdevs_list": [ 00:43:10.896 { 00:43:10.896 "name": "BaseBdev1", 00:43:10.896 "uuid": "5a1b1cef-bbd1-48f4-bd3a-98c609c188a7", 00:43:10.896 "is_configured": true, 00:43:10.896 "data_offset": 2048, 00:43:10.896 "data_size": 63488 00:43:10.896 }, 00:43:10.896 { 00:43:10.896 "name": "BaseBdev2", 00:43:10.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:10.896 "is_configured": false, 00:43:10.896 "data_offset": 0, 00:43:10.896 "data_size": 0 00:43:10.896 }, 00:43:10.896 { 00:43:10.896 "name": "BaseBdev3", 00:43:10.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:10.896 "is_configured": false, 00:43:10.896 "data_offset": 0, 00:43:10.896 "data_size": 0 00:43:10.896 } 00:43:10.896 ] 00:43:10.896 }' 00:43:10.896 16:20:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:10.896 16:20:15 -- common/autotest_common.sh@10 -- # set +x 00:43:11.155 16:20:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:43:11.414 [2024-07-22 16:20:15.682664] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:43:11.414 [2024-07-22 16:20:15.682741] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:43:11.672 16:20:15 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:43:11.672 16:20:15 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:43:11.943 16:20:16 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:43:12.202 BaseBdev1 00:43:12.202 16:20:16 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:43:12.202 16:20:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:43:12.202 16:20:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:43:12.202 16:20:16 -- common/autotest_common.sh@889 -- # local i 00:43:12.202 16:20:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:43:12.202 16:20:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:43:12.202 16:20:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:43:12.475 16:20:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:43:12.733 [ 00:43:12.733 { 00:43:12.733 "name": "BaseBdev1", 00:43:12.733 "aliases": [ 00:43:12.733 "624c37c9-33e1-41c2-b884-27722b28d601" 00:43:12.733 ], 00:43:12.733 "product_name": "Malloc disk", 00:43:12.733 "block_size": 512, 00:43:12.733 "num_blocks": 65536, 00:43:12.733 "uuid": "624c37c9-33e1-41c2-b884-27722b28d601", 00:43:12.733 "assigned_rate_limits": { 00:43:12.733 "rw_ios_per_sec": 0, 00:43:12.733 "rw_mbytes_per_sec": 0, 00:43:12.733 "r_mbytes_per_sec": 0, 00:43:12.733 "w_mbytes_per_sec": 0 00:43:12.733 }, 00:43:12.733 "claimed": false, 00:43:12.733 "zoned": false, 00:43:12.733 "supported_io_types": { 00:43:12.733 "read": true, 00:43:12.733 "write": true, 00:43:12.733 "unmap": true, 00:43:12.733 "write_zeroes": true, 00:43:12.733 "flush": true, 00:43:12.733 "reset": true, 00:43:12.733 "compare": false, 00:43:12.733 "compare_and_write": false, 00:43:12.733 "abort": true, 00:43:12.733 "nvme_admin": false, 00:43:12.733 "nvme_io": false 00:43:12.733 }, 00:43:12.733 "memory_domains": [ 00:43:12.733 { 00:43:12.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:12.733 "dma_device_type": 2 00:43:12.733 } 00:43:12.733 ], 00:43:12.733 "driver_specific": {} 00:43:12.733 } 00:43:12.733 ] 00:43:12.733 16:20:16 -- common/autotest_common.sh@895 -- # return 0 00:43:12.733 16:20:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:43:13.004 [2024-07-22 16:20:17.137709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:13.004 [2024-07-22 16:20:17.140123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:43:13.004 [2024-07-22 16:20:17.140178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:43:13.004 [2024-07-22 16:20:17.140194] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:43:13.004 [2024-07-22 16:20:17.140211] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:43:13.004 
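The (( i < num_base_bdevs )) loop entered above creates the remaining base bdevs one at a time and re-checks the array state after each one: the raid5f volume was registered while its base bdevs did not yet exist, so it starts out "configuring", and each new malloc bdev is claimed by the array until the state flips to "online". A minimal manual version of that flow, using only RPCs that appear in this trace and assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock, would look roughly like:
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Register the array first; missing members keep it in the "configuring" state.
  $RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # Each new 32 MiB / 512-byte-block malloc bdev is claimed by the array as it appears.
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  $RPC bdev_malloc_create 32 512 -b BaseBdev3
  # The "state" field in this dump moves from configuring to online once all members exist.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'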
16:20:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:13.004 16:20:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:13.289 16:20:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:13.289 "name": "Existed_Raid", 00:43:13.289 "uuid": "bb0be57b-ee89-4d6c-aef9-bdc529f0e838", 00:43:13.289 "strip_size_kb": 64, 00:43:13.289 "state": "configuring", 00:43:13.289 "raid_level": "raid5f", 00:43:13.289 "superblock": true, 00:43:13.289 "num_base_bdevs": 3, 00:43:13.289 "num_base_bdevs_discovered": 1, 00:43:13.289 "num_base_bdevs_operational": 3, 00:43:13.289 "base_bdevs_list": [ 00:43:13.289 { 00:43:13.289 "name": "BaseBdev1", 00:43:13.289 "uuid": "624c37c9-33e1-41c2-b884-27722b28d601", 00:43:13.289 "is_configured": true, 00:43:13.289 "data_offset": 2048, 00:43:13.289 "data_size": 63488 00:43:13.289 }, 00:43:13.289 { 00:43:13.289 "name": "BaseBdev2", 00:43:13.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:13.289 "is_configured": false, 00:43:13.289 "data_offset": 0, 00:43:13.289 "data_size": 0 00:43:13.289 }, 00:43:13.289 { 00:43:13.289 "name": "BaseBdev3", 00:43:13.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:13.289 "is_configured": false, 00:43:13.289 "data_offset": 0, 00:43:13.289 "data_size": 0 00:43:13.289 } 00:43:13.289 ] 00:43:13.289 }' 00:43:13.289 16:20:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:13.289 16:20:17 -- common/autotest_common.sh@10 -- # set +x 00:43:13.547 16:20:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:43:13.805 [2024-07-22 16:20:18.072904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:13.805 BaseBdev2 00:43:14.063 16:20:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:43:14.063 16:20:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:43:14.063 16:20:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:43:14.063 16:20:18 -- common/autotest_common.sh@889 -- # local i 00:43:14.063 16:20:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:43:14.063 16:20:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:43:14.063 16:20:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:43:14.063 16:20:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:43:14.322 [ 00:43:14.322 { 00:43:14.322 "name": "BaseBdev2", 00:43:14.322 "aliases": [ 00:43:14.322 
"88e35f9b-710f-4aaf-bacb-296fb8ef4311" 00:43:14.322 ], 00:43:14.322 "product_name": "Malloc disk", 00:43:14.322 "block_size": 512, 00:43:14.322 "num_blocks": 65536, 00:43:14.322 "uuid": "88e35f9b-710f-4aaf-bacb-296fb8ef4311", 00:43:14.322 "assigned_rate_limits": { 00:43:14.322 "rw_ios_per_sec": 0, 00:43:14.322 "rw_mbytes_per_sec": 0, 00:43:14.322 "r_mbytes_per_sec": 0, 00:43:14.322 "w_mbytes_per_sec": 0 00:43:14.322 }, 00:43:14.322 "claimed": true, 00:43:14.322 "claim_type": "exclusive_write", 00:43:14.322 "zoned": false, 00:43:14.322 "supported_io_types": { 00:43:14.322 "read": true, 00:43:14.322 "write": true, 00:43:14.322 "unmap": true, 00:43:14.322 "write_zeroes": true, 00:43:14.322 "flush": true, 00:43:14.322 "reset": true, 00:43:14.322 "compare": false, 00:43:14.322 "compare_and_write": false, 00:43:14.322 "abort": true, 00:43:14.322 "nvme_admin": false, 00:43:14.322 "nvme_io": false 00:43:14.322 }, 00:43:14.322 "memory_domains": [ 00:43:14.322 { 00:43:14.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:14.322 "dma_device_type": 2 00:43:14.322 } 00:43:14.322 ], 00:43:14.322 "driver_specific": {} 00:43:14.322 } 00:43:14.322 ] 00:43:14.322 16:20:18 -- common/autotest_common.sh@895 -- # return 0 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:14.322 16:20:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:14.580 16:20:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:14.580 "name": "Existed_Raid", 00:43:14.580 "uuid": "bb0be57b-ee89-4d6c-aef9-bdc529f0e838", 00:43:14.580 "strip_size_kb": 64, 00:43:14.580 "state": "configuring", 00:43:14.580 "raid_level": "raid5f", 00:43:14.580 "superblock": true, 00:43:14.580 "num_base_bdevs": 3, 00:43:14.580 "num_base_bdevs_discovered": 2, 00:43:14.580 "num_base_bdevs_operational": 3, 00:43:14.580 "base_bdevs_list": [ 00:43:14.580 { 00:43:14.580 "name": "BaseBdev1", 00:43:14.580 "uuid": "624c37c9-33e1-41c2-b884-27722b28d601", 00:43:14.580 "is_configured": true, 00:43:14.580 "data_offset": 2048, 00:43:14.580 "data_size": 63488 00:43:14.580 }, 00:43:14.580 { 00:43:14.580 "name": "BaseBdev2", 00:43:14.580 "uuid": "88e35f9b-710f-4aaf-bacb-296fb8ef4311", 00:43:14.580 "is_configured": true, 00:43:14.580 "data_offset": 2048, 00:43:14.580 "data_size": 63488 00:43:14.580 }, 00:43:14.580 { 00:43:14.580 "name": "BaseBdev3", 00:43:14.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:14.580 "is_configured": false, 00:43:14.580 "data_offset": 0, 00:43:14.580 "data_size": 0 
00:43:14.580 } 00:43:14.580 ] 00:43:14.580 }' 00:43:14.580 16:20:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:14.580 16:20:18 -- common/autotest_common.sh@10 -- # set +x 00:43:15.147 16:20:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:43:15.405 [2024-07-22 16:20:19.437181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:15.405 [2024-07-22 16:20:19.437516] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:43:15.405 [2024-07-22 16:20:19.437544] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:15.405 [2024-07-22 16:20:19.437667] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:43:15.405 BaseBdev3 00:43:15.405 [2024-07-22 16:20:19.443141] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:43:15.405 [2024-07-22 16:20:19.443167] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:43:15.405 [2024-07-22 16:20:19.443379] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:15.405 16:20:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:43:15.405 16:20:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:43:15.405 16:20:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:43:15.405 16:20:19 -- common/autotest_common.sh@889 -- # local i 00:43:15.405 16:20:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:43:15.405 16:20:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:43:15.405 16:20:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:43:15.663 16:20:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:43:15.976 [ 00:43:15.976 { 00:43:15.976 "name": "BaseBdev3", 00:43:15.976 "aliases": [ 00:43:15.976 "f9b88a51-836a-4d48-84f1-3d44c4d26664" 00:43:15.976 ], 00:43:15.976 "product_name": "Malloc disk", 00:43:15.976 "block_size": 512, 00:43:15.976 "num_blocks": 65536, 00:43:15.976 "uuid": "f9b88a51-836a-4d48-84f1-3d44c4d26664", 00:43:15.976 "assigned_rate_limits": { 00:43:15.976 "rw_ios_per_sec": 0, 00:43:15.976 "rw_mbytes_per_sec": 0, 00:43:15.976 "r_mbytes_per_sec": 0, 00:43:15.976 "w_mbytes_per_sec": 0 00:43:15.976 }, 00:43:15.976 "claimed": true, 00:43:15.976 "claim_type": "exclusive_write", 00:43:15.976 "zoned": false, 00:43:15.976 "supported_io_types": { 00:43:15.976 "read": true, 00:43:15.976 "write": true, 00:43:15.976 "unmap": true, 00:43:15.976 "write_zeroes": true, 00:43:15.976 "flush": true, 00:43:15.976 "reset": true, 00:43:15.976 "compare": false, 00:43:15.976 "compare_and_write": false, 00:43:15.976 "abort": true, 00:43:15.976 "nvme_admin": false, 00:43:15.976 "nvme_io": false 00:43:15.976 }, 00:43:15.976 "memory_domains": [ 00:43:15.976 { 00:43:15.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:43:15.976 "dma_device_type": 2 00:43:15.976 } 00:43:15.976 ], 00:43:15.976 "driver_specific": {} 00:43:15.976 } 00:43:15.976 ] 00:43:15.976 16:20:20 -- common/autotest_common.sh@895 -- # return 0 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:43:15.976 16:20:20 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:15.976 16:20:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:16.234 16:20:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:16.234 "name": "Existed_Raid", 00:43:16.234 "uuid": "bb0be57b-ee89-4d6c-aef9-bdc529f0e838", 00:43:16.234 "strip_size_kb": 64, 00:43:16.234 "state": "online", 00:43:16.234 "raid_level": "raid5f", 00:43:16.234 "superblock": true, 00:43:16.234 "num_base_bdevs": 3, 00:43:16.234 "num_base_bdevs_discovered": 3, 00:43:16.234 "num_base_bdevs_operational": 3, 00:43:16.234 "base_bdevs_list": [ 00:43:16.234 { 00:43:16.234 "name": "BaseBdev1", 00:43:16.234 "uuid": "624c37c9-33e1-41c2-b884-27722b28d601", 00:43:16.234 "is_configured": true, 00:43:16.234 "data_offset": 2048, 00:43:16.234 "data_size": 63488 00:43:16.234 }, 00:43:16.234 { 00:43:16.234 "name": "BaseBdev2", 00:43:16.234 "uuid": "88e35f9b-710f-4aaf-bacb-296fb8ef4311", 00:43:16.234 "is_configured": true, 00:43:16.234 "data_offset": 2048, 00:43:16.234 "data_size": 63488 00:43:16.234 }, 00:43:16.234 { 00:43:16.234 "name": "BaseBdev3", 00:43:16.234 "uuid": "f9b88a51-836a-4d48-84f1-3d44c4d26664", 00:43:16.234 "is_configured": true, 00:43:16.235 "data_offset": 2048, 00:43:16.235 "data_size": 63488 00:43:16.235 } 00:43:16.235 ] 00:43:16.235 }' 00:43:16.235 16:20:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:16.235 16:20:20 -- common/autotest_common.sh@10 -- # set +x 00:43:16.493 16:20:20 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:43:16.751 [2024-07-22 16:20:20.817808] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:16.751 16:20:20 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:16.751 16:20:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:43:17.009 16:20:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:17.009 "name": "Existed_Raid", 00:43:17.009 "uuid": "bb0be57b-ee89-4d6c-aef9-bdc529f0e838", 00:43:17.009 "strip_size_kb": 64, 00:43:17.009 "state": "online", 00:43:17.009 "raid_level": "raid5f", 00:43:17.009 "superblock": true, 00:43:17.009 "num_base_bdevs": 3, 00:43:17.009 "num_base_bdevs_discovered": 2, 00:43:17.009 "num_base_bdevs_operational": 2, 00:43:17.009 "base_bdevs_list": [ 00:43:17.009 { 00:43:17.009 "name": null, 00:43:17.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:17.009 "is_configured": false, 00:43:17.009 "data_offset": 2048, 00:43:17.009 "data_size": 63488 00:43:17.009 }, 00:43:17.009 { 00:43:17.009 "name": "BaseBdev2", 00:43:17.009 "uuid": "88e35f9b-710f-4aaf-bacb-296fb8ef4311", 00:43:17.009 "is_configured": true, 00:43:17.009 "data_offset": 2048, 00:43:17.009 "data_size": 63488 00:43:17.009 }, 00:43:17.009 { 00:43:17.009 "name": "BaseBdev3", 00:43:17.009 "uuid": "f9b88a51-836a-4d48-84f1-3d44c4d26664", 00:43:17.009 "is_configured": true, 00:43:17.009 "data_offset": 2048, 00:43:17.009 "data_size": 63488 00:43:17.009 } 00:43:17.009 ] 00:43:17.009 }' 00:43:17.009 16:20:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:17.009 16:20:21 -- common/autotest_common.sh@10 -- # set +x 00:43:17.268 16:20:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:43:17.268 16:20:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:43:17.268 16:20:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:17.268 16:20:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:43:17.835 16:20:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:43:17.835 16:20:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:43:17.835 16:20:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:43:17.835 [2024-07-22 16:20:22.100836] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:43:17.835 [2024-07-22 16:20:22.101293] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:17.835 [2024-07-22 16:20:22.101408] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:18.093 16:20:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:43:18.093 16:20:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:43:18.093 16:20:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:18.093 16:20:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:43:18.351 16:20:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:43:18.351 16:20:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:43:18.351 16:20:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:43:18.608 [2024-07-22 16:20:22.773090] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:43:18.608 [2024-07-22 16:20:22.773394] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:43:18.931 16:20:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:43:18.931 16:20:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:43:18.931 16:20:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:18.931 16:20:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:43:18.931 16:20:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:43:18.931 16:20:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:43:18.931 16:20:23 -- bdev/bdev_raid.sh@287 -- # killprocess 84139 00:43:18.931 16:20:23 -- common/autotest_common.sh@926 -- # '[' -z 84139 ']' 00:43:18.931 16:20:23 -- common/autotest_common.sh@930 -- # kill -0 84139 00:43:18.931 16:20:23 -- common/autotest_common.sh@931 -- # uname 00:43:18.931 16:20:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:18.931 16:20:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84139 00:43:18.931 killing process with pid 84139 00:43:18.931 16:20:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:43:18.931 16:20:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:43:18.931 16:20:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84139' 00:43:18.931 16:20:23 -- common/autotest_common.sh@945 -- # kill 84139 00:43:18.931 16:20:23 -- common/autotest_common.sh@950 -- # wait 84139 00:43:18.931 [2024-07-22 16:20:23.162032] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:18.931 [2024-07-22 16:20:23.162198] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:20.306 16:20:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:43:20.306 00:43:20.306 real 0m13.094s 00:43:20.306 user 0m21.514s 00:43:20.306 sys 0m2.100s 00:43:20.306 16:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:20.306 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:43:20.306 ************************************ 00:43:20.306 END TEST raid5f_state_function_test_sb 00:43:20.306 ************************************ 00:43:20.306 16:20:24 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:43:20.306 16:20:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:43:20.306 16:20:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:20.306 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:43:20.564 ************************************ 00:43:20.564 START TEST raid5f_superblock_test 00:43:20.564 ************************************ 00:43:20.564 16:20:24 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@357 -- # raid_pid=84500 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@358 -- # waitforlisten 84500 /var/tmp/spdk-raid.sock 00:43:20.564 16:20:24 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:43:20.564 16:20:24 -- common/autotest_common.sh@819 -- # '[' -z 84500 ']' 00:43:20.564 16:20:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:43:20.564 16:20:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:20.564 16:20:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:43:20.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:43:20.564 16:20:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:20.564 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:43:20.564 [2024-07-22 16:20:24.657006] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:43:20.564 [2024-07-22 16:20:24.657443] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84500 ] 00:43:20.564 [2024-07-22 16:20:24.837116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:20.822 [2024-07-22 16:20:25.090907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.081 [2024-07-22 16:20:25.311936] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:21.653 16:20:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:21.653 16:20:25 -- common/autotest_common.sh@852 -- # return 0 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:21.653 16:20:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:43:21.653 malloc1 00:43:21.931 16:20:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:21.931 [2024-07-22 16:20:26.178956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:21.931 [2024-07-22 16:20:26.179442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:43:21.931 [2024-07-22 16:20:26.179613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:43:21.931 [2024-07-22 16:20:26.179752] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:21.931 [2024-07-22 16:20:26.182678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:21.931 [2024-07-22 16:20:26.182854] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:21.931 pt1 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:21.931 16:20:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:43:22.188 malloc2 00:43:22.446 16:20:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:22.703 [2024-07-22 16:20:26.725831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:22.703 [2024-07-22 16:20:26.725946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:22.703 [2024-07-22 16:20:26.725984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:43:22.703 [2024-07-22 16:20:26.725999] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:22.703 [2024-07-22 16:20:26.728903] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:22.703 [2024-07-22 16:20:26.728945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:22.703 pt2 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:43:22.703 16:20:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:43:22.962 malloc3 00:43:22.962 16:20:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:22.962 [2024-07-22 16:20:27.205422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:22.962 [2024-07-22 16:20:27.205526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
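In the superblock test the malloc bdevs are not used directly; each one is wrapped in a passthru bdev created with an explicit -u UUID, so every base bdev carries a fixed, predictable identity rather than a random one. A rough sketch of that preparation step, again assuming the same RPC socket and using only commands that appear in this trace:
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      # 32 MiB backing bdev with 512-byte blocks, as above.
      $RPC bdev_malloc_create 32 512 -b "malloc$i"
      # Passthru layer pins the base bdev UUID to 00000000-0000-0000-0000-00000000000$i.
      $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done
  # Assemble the raid5f volume on the passthru bdevs with an on-disk superblock (-s).
  $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s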
00:43:22.962 [2024-07-22 16:20:27.205563] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:43:22.962 [2024-07-22 16:20:27.205578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:22.962 [2024-07-22 16:20:27.208872] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:22.962 [2024-07-22 16:20:27.208912] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:22.962 pt3 00:43:22.962 16:20:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:43:22.962 16:20:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:43:22.962 16:20:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:43:23.220 [2024-07-22 16:20:27.421751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:23.220 [2024-07-22 16:20:27.424324] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:23.220 [2024-07-22 16:20:27.424442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:23.220 [2024-07-22 16:20:27.424672] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:43:23.220 [2024-07-22 16:20:27.424694] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:23.220 [2024-07-22 16:20:27.424823] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:43:23.220 [2024-07-22 16:20:27.430038] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:43:23.220 [2024-07-22 16:20:27.430068] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:43:23.220 [2024-07-22 16:20:27.430334] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:23.220 16:20:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:23.478 16:20:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:23.478 "name": "raid_bdev1", 00:43:23.478 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:23.478 "strip_size_kb": 64, 00:43:23.478 "state": "online", 00:43:23.478 "raid_level": "raid5f", 00:43:23.478 "superblock": true, 00:43:23.478 "num_base_bdevs": 3, 00:43:23.478 "num_base_bdevs_discovered": 3, 00:43:23.478 "num_base_bdevs_operational": 3, 00:43:23.478 "base_bdevs_list": [ 00:43:23.478 { 00:43:23.478 "name": "pt1", 00:43:23.478 "uuid": 
"aaf22919-e5b0-522d-a50b-7c74dc95a406", 00:43:23.478 "is_configured": true, 00:43:23.478 "data_offset": 2048, 00:43:23.478 "data_size": 63488 00:43:23.478 }, 00:43:23.478 { 00:43:23.478 "name": "pt2", 00:43:23.478 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:23.478 "is_configured": true, 00:43:23.478 "data_offset": 2048, 00:43:23.478 "data_size": 63488 00:43:23.478 }, 00:43:23.478 { 00:43:23.478 "name": "pt3", 00:43:23.478 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:23.478 "is_configured": true, 00:43:23.478 "data_offset": 2048, 00:43:23.479 "data_size": 63488 00:43:23.479 } 00:43:23.479 ] 00:43:23.479 }' 00:43:23.479 16:20:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:23.479 16:20:27 -- common/autotest_common.sh@10 -- # set +x 00:43:24.045 16:20:28 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:43:24.045 16:20:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:43:24.045 [2024-07-22 16:20:28.304613] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:24.304 16:20:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0be6ac61-3e33-47d2-8618-209be71218f4 00:43:24.304 16:20:28 -- bdev/bdev_raid.sh@380 -- # '[' -z 0be6ac61-3e33-47d2-8618-209be71218f4 ']' 00:43:24.304 16:20:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:43:24.304 [2024-07-22 16:20:28.532481] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:24.304 [2024-07-22 16:20:28.532561] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:24.304 [2024-07-22 16:20:28.532652] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:24.304 [2024-07-22 16:20:28.532736] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:24.304 [2024-07-22 16:20:28.532756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:43:24.304 16:20:28 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:24.304 16:20:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:43:24.562 16:20:28 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:43:24.562 16:20:28 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:43:24.562 16:20:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:43:24.562 16:20:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:43:24.820 16:20:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:43:24.820 16:20:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:43:25.078 16:20:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:43:25.078 16:20:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:43:25.336 16:20:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:43:25.336 16:20:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:43:25.595 16:20:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:43:25.595 16:20:29 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:43:25.595 16:20:29 -- common/autotest_common.sh@640 -- # local es=0 00:43:25.595 16:20:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:43:25.595 16:20:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:25.596 16:20:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:25.596 16:20:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:25.596 16:20:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:25.596 16:20:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:25.596 16:20:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:43:25.596 16:20:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:43:25.596 16:20:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:43:25.596 16:20:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:43:25.853 [2024-07-22 16:20:30.012862] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:43:25.853 [2024-07-22 16:20:30.015522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:43:25.853 [2024-07-22 16:20:30.015583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:43:25.853 [2024-07-22 16:20:30.015653] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:43:25.854 [2024-07-22 16:20:30.015722] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:43:25.854 [2024-07-22 16:20:30.015758] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:43:25.854 [2024-07-22 16:20:30.015781] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:25.854 [2024-07-22 16:20:30.015797] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:43:25.854 request: 00:43:25.854 { 00:43:25.854 "name": "raid_bdev1", 00:43:25.854 "raid_level": "raid5f", 00:43:25.854 "base_bdevs": [ 00:43:25.854 "malloc1", 00:43:25.854 "malloc2", 00:43:25.854 "malloc3" 00:43:25.854 ], 00:43:25.854 "superblock": false, 00:43:25.854 "strip_size_kb": 64, 00:43:25.854 "method": "bdev_raid_create", 00:43:25.854 "req_id": 1 00:43:25.854 } 00:43:25.854 Got JSON-RPC error response 00:43:25.854 response: 00:43:25.854 { 00:43:25.854 "code": -17, 00:43:25.854 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:43:25.854 } 00:43:25.854 16:20:30 -- common/autotest_common.sh@643 -- # es=1 00:43:25.854 16:20:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:43:25.854 16:20:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:43:25.854 16:20:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:43:25.854 16:20:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:25.854 16:20:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:43:26.111 16:20:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:43:26.111 16:20:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:43:26.111 16:20:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:26.382 [2024-07-22 16:20:30.588915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:26.382 [2024-07-22 16:20:30.589325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:26.382 [2024-07-22 16:20:30.589399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:43:26.382 [2024-07-22 16:20:30.589535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:26.382 [2024-07-22 16:20:30.592277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:26.382 [2024-07-22 16:20:30.592444] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:26.382 [2024-07-22 16:20:30.592681] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:43:26.382 [2024-07-22 16:20:30.592883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:26.382 pt1 00:43:26.382 16:20:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:43:26.382 16:20:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:26.382 16:20:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:26.382 16:20:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:26.382 16:20:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:26.383 16:20:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:26.383 16:20:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:26.383 16:20:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:26.383 16:20:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:26.383 16:20:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:26.383 16:20:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:26.383 16:20:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:26.641 16:20:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:26.641 "name": "raid_bdev1", 00:43:26.641 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:26.641 "strip_size_kb": 64, 00:43:26.641 "state": "configuring", 00:43:26.641 "raid_level": "raid5f", 00:43:26.641 "superblock": true, 00:43:26.641 "num_base_bdevs": 3, 00:43:26.641 "num_base_bdevs_discovered": 1, 00:43:26.641 "num_base_bdevs_operational": 3, 00:43:26.641 "base_bdevs_list": [ 00:43:26.641 { 00:43:26.641 "name": "pt1", 00:43:26.641 "uuid": "aaf22919-e5b0-522d-a50b-7c74dc95a406", 00:43:26.641 "is_configured": true, 00:43:26.641 "data_offset": 2048, 00:43:26.641 "data_size": 63488 00:43:26.641 }, 00:43:26.641 { 00:43:26.641 "name": null, 00:43:26.641 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:26.641 "is_configured": false, 00:43:26.641 "data_offset": 2048, 00:43:26.641 "data_size": 63488 00:43:26.641 }, 00:43:26.641 { 00:43:26.641 "name": null, 00:43:26.641 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:26.641 "is_configured": false, 00:43:26.641 
"data_offset": 2048, 00:43:26.641 "data_size": 63488 00:43:26.641 } 00:43:26.641 ] 00:43:26.641 }' 00:43:26.641 16:20:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:26.641 16:20:30 -- common/autotest_common.sh@10 -- # set +x 00:43:27.208 16:20:31 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:43:27.208 16:20:31 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:27.466 [2024-07-22 16:20:31.541516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:27.466 [2024-07-22 16:20:31.541836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:27.466 [2024-07-22 16:20:31.541878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:43:27.466 [2024-07-22 16:20:31.541899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:27.466 [2024-07-22 16:20:31.542505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:27.466 [2024-07-22 16:20:31.542533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:27.466 [2024-07-22 16:20:31.542628] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:43:27.466 [2024-07-22 16:20:31.542658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:27.466 pt2 00:43:27.466 16:20:31 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:43:27.724 [2024-07-22 16:20:31.865619] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:27.724 16:20:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:27.982 16:20:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:27.982 "name": "raid_bdev1", 00:43:27.982 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:27.982 "strip_size_kb": 64, 00:43:27.982 "state": "configuring", 00:43:27.982 "raid_level": "raid5f", 00:43:27.982 "superblock": true, 00:43:27.982 "num_base_bdevs": 3, 00:43:27.982 "num_base_bdevs_discovered": 1, 00:43:27.982 "num_base_bdevs_operational": 3, 00:43:27.982 "base_bdevs_list": [ 00:43:27.982 { 00:43:27.982 "name": "pt1", 00:43:27.982 "uuid": "aaf22919-e5b0-522d-a50b-7c74dc95a406", 00:43:27.982 "is_configured": true, 00:43:27.982 "data_offset": 2048, 00:43:27.982 "data_size": 63488 00:43:27.982 }, 00:43:27.982 { 00:43:27.982 "name": null, 00:43:27.982 "uuid": 
"508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:27.982 "is_configured": false, 00:43:27.982 "data_offset": 2048, 00:43:27.982 "data_size": 63488 00:43:27.982 }, 00:43:27.982 { 00:43:27.982 "name": null, 00:43:27.982 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:27.982 "is_configured": false, 00:43:27.982 "data_offset": 2048, 00:43:27.982 "data_size": 63488 00:43:27.982 } 00:43:27.982 ] 00:43:27.982 }' 00:43:27.982 16:20:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:27.982 16:20:32 -- common/autotest_common.sh@10 -- # set +x 00:43:28.549 16:20:32 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:43:28.549 16:20:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:43:28.549 16:20:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:28.549 [2024-07-22 16:20:32.821840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:28.549 [2024-07-22 16:20:32.821940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:28.549 [2024-07-22 16:20:32.821977] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:43:28.549 [2024-07-22 16:20:32.822009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:28.549 [2024-07-22 16:20:32.822568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:28.549 [2024-07-22 16:20:32.822599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:28.549 [2024-07-22 16:20:32.822708] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:43:28.549 [2024-07-22 16:20:32.822736] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:28.808 pt2 00:43:28.808 16:20:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:43:28.808 16:20:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:43:28.808 16:20:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:29.065 [2024-07-22 16:20:33.113933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:29.065 [2024-07-22 16:20:33.114271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:29.065 [2024-07-22 16:20:33.114435] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:43:29.065 [2024-07-22 16:20:33.114558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:29.065 [2024-07-22 16:20:33.115153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:29.065 [2024-07-22 16:20:33.115291] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:29.065 [2024-07-22 16:20:33.115525] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:43:29.065 [2024-07-22 16:20:33.115659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:29.065 [2024-07-22 16:20:33.115874] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:43:29.065 [2024-07-22 16:20:33.116010] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:29.065 [2024-07-22 16:20:33.116210] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005790 00:43:29.065 [2024-07-22 16:20:33.121498] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:43:29.065 [2024-07-22 16:20:33.121644] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:43:29.065 [2024-07-22 16:20:33.122197] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:29.065 pt3 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:29.065 16:20:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:29.324 16:20:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:29.324 "name": "raid_bdev1", 00:43:29.324 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:29.324 "strip_size_kb": 64, 00:43:29.324 "state": "online", 00:43:29.324 "raid_level": "raid5f", 00:43:29.324 "superblock": true, 00:43:29.324 "num_base_bdevs": 3, 00:43:29.324 "num_base_bdevs_discovered": 3, 00:43:29.324 "num_base_bdevs_operational": 3, 00:43:29.324 "base_bdevs_list": [ 00:43:29.324 { 00:43:29.324 "name": "pt1", 00:43:29.324 "uuid": "aaf22919-e5b0-522d-a50b-7c74dc95a406", 00:43:29.324 "is_configured": true, 00:43:29.324 "data_offset": 2048, 00:43:29.324 "data_size": 63488 00:43:29.324 }, 00:43:29.324 { 00:43:29.324 "name": "pt2", 00:43:29.324 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:29.324 "is_configured": true, 00:43:29.324 "data_offset": 2048, 00:43:29.324 "data_size": 63488 00:43:29.324 }, 00:43:29.324 { 00:43:29.324 "name": "pt3", 00:43:29.324 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:29.324 "is_configured": true, 00:43:29.324 "data_offset": 2048, 00:43:29.324 "data_size": 63488 00:43:29.324 } 00:43:29.324 ] 00:43:29.324 }' 00:43:29.324 16:20:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:29.324 16:20:33 -- common/autotest_common.sh@10 -- # set +x 00:43:29.582 16:20:33 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:43:29.582 16:20:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:43:29.841 [2024-07-22 16:20:34.040770] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:29.841 16:20:34 -- bdev/bdev_raid.sh@430 -- # '[' 0be6ac61-3e33-47d2-8618-209be71218f4 '!=' 0be6ac61-3e33-47d2-8618-209be71218f4 ']' 00:43:29.841 16:20:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:43:29.841 16:20:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:43:29.841 
16:20:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:43:29.841 16:20:34 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:43:30.101 [2024-07-22 16:20:34.316669] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:30.101 16:20:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:30.667 16:20:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:30.667 "name": "raid_bdev1", 00:43:30.667 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:30.667 "strip_size_kb": 64, 00:43:30.667 "state": "online", 00:43:30.667 "raid_level": "raid5f", 00:43:30.667 "superblock": true, 00:43:30.667 "num_base_bdevs": 3, 00:43:30.667 "num_base_bdevs_discovered": 2, 00:43:30.668 "num_base_bdevs_operational": 2, 00:43:30.668 "base_bdevs_list": [ 00:43:30.668 { 00:43:30.668 "name": null, 00:43:30.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:30.668 "is_configured": false, 00:43:30.668 "data_offset": 2048, 00:43:30.668 "data_size": 63488 00:43:30.668 }, 00:43:30.668 { 00:43:30.668 "name": "pt2", 00:43:30.668 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:30.668 "is_configured": true, 00:43:30.668 "data_offset": 2048, 00:43:30.668 "data_size": 63488 00:43:30.668 }, 00:43:30.668 { 00:43:30.668 "name": "pt3", 00:43:30.668 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:30.668 "is_configured": true, 00:43:30.668 "data_offset": 2048, 00:43:30.668 "data_size": 63488 00:43:30.668 } 00:43:30.668 ] 00:43:30.668 }' 00:43:30.668 16:20:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:30.668 16:20:34 -- common/autotest_common.sh@10 -- # set +x 00:43:30.926 16:20:34 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:43:31.185 [2024-07-22 16:20:35.221029] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:31.185 [2024-07-22 16:20:35.221093] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:31.185 [2024-07-22 16:20:35.221200] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:31.185 [2024-07-22 16:20:35.221281] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:31.185 [2024-07-22 16:20:35.221302] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:43:31.185 16:20:35 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:31.185 16:20:35 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:43:31.444 16:20:35 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:43:31.444 16:20:35 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:43:31.444 16:20:35 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:43:31.444 16:20:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:43:31.444 16:20:35 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:43:31.702 16:20:35 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:43:31.702 16:20:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:43:31.702 16:20:35 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:43:31.960 16:20:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:43:31.960 16:20:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:43:31.960 16:20:36 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:43:31.960 16:20:36 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:43:31.960 16:20:36 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:32.219 [2024-07-22 16:20:36.241223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:32.219 [2024-07-22 16:20:36.241333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:32.219 [2024-07-22 16:20:36.241396] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:43:32.219 [2024-07-22 16:20:36.241416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:32.219 [2024-07-22 16:20:36.244297] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:32.219 [2024-07-22 16:20:36.244561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:32.219 [2024-07-22 16:20:36.244702] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:43:32.219 [2024-07-22 16:20:36.244763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:32.219 pt2 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:32.219 "name": "raid_bdev1", 00:43:32.219 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:32.219 "strip_size_kb": 64, 
00:43:32.219 "state": "configuring", 00:43:32.219 "raid_level": "raid5f", 00:43:32.219 "superblock": true, 00:43:32.219 "num_base_bdevs": 3, 00:43:32.219 "num_base_bdevs_discovered": 1, 00:43:32.219 "num_base_bdevs_operational": 2, 00:43:32.219 "base_bdevs_list": [ 00:43:32.219 { 00:43:32.219 "name": null, 00:43:32.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:32.219 "is_configured": false, 00:43:32.219 "data_offset": 2048, 00:43:32.219 "data_size": 63488 00:43:32.219 }, 00:43:32.219 { 00:43:32.219 "name": "pt2", 00:43:32.219 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:32.219 "is_configured": true, 00:43:32.219 "data_offset": 2048, 00:43:32.219 "data_size": 63488 00:43:32.219 }, 00:43:32.219 { 00:43:32.219 "name": null, 00:43:32.219 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:32.219 "is_configured": false, 00:43:32.219 "data_offset": 2048, 00:43:32.219 "data_size": 63488 00:43:32.219 } 00:43:32.219 ] 00:43:32.219 }' 00:43:32.219 16:20:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:32.219 16:20:36 -- common/autotest_common.sh@10 -- # set +x 00:43:32.785 16:20:36 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:43:32.785 16:20:36 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:43:32.785 16:20:36 -- bdev/bdev_raid.sh@462 -- # i=2 00:43:32.785 16:20:36 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:33.043 [2024-07-22 16:20:37.097483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:33.043 [2024-07-22 16:20:37.097597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:33.043 [2024-07-22 16:20:37.097633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:43:33.043 [2024-07-22 16:20:37.097653] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:33.043 [2024-07-22 16:20:37.098217] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:33.043 [2024-07-22 16:20:37.098247] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:33.043 [2024-07-22 16:20:37.098355] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:43:33.043 [2024-07-22 16:20:37.098389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:33.043 [2024-07-22 16:20:37.098531] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:43:33.043 [2024-07-22 16:20:37.098551] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:33.043 [2024-07-22 16:20:37.098646] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:43:33.043 [2024-07-22 16:20:37.103828] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:43:33.043 [2024-07-22 16:20:37.103863] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:43:33.043 pt3 00:43:33.043 [2024-07-22 16:20:37.104565] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:33.043 
16:20:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:33.043 16:20:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:33.301 16:20:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:33.301 "name": "raid_bdev1", 00:43:33.301 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:33.301 "strip_size_kb": 64, 00:43:33.301 "state": "online", 00:43:33.301 "raid_level": "raid5f", 00:43:33.301 "superblock": true, 00:43:33.301 "num_base_bdevs": 3, 00:43:33.301 "num_base_bdevs_discovered": 2, 00:43:33.301 "num_base_bdevs_operational": 2, 00:43:33.301 "base_bdevs_list": [ 00:43:33.301 { 00:43:33.301 "name": null, 00:43:33.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:33.301 "is_configured": false, 00:43:33.301 "data_offset": 2048, 00:43:33.301 "data_size": 63488 00:43:33.301 }, 00:43:33.301 { 00:43:33.301 "name": "pt2", 00:43:33.301 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:33.301 "is_configured": true, 00:43:33.301 "data_offset": 2048, 00:43:33.301 "data_size": 63488 00:43:33.301 }, 00:43:33.301 { 00:43:33.301 "name": "pt3", 00:43:33.301 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:33.301 "is_configured": true, 00:43:33.301 "data_offset": 2048, 00:43:33.301 "data_size": 63488 00:43:33.301 } 00:43:33.301 ] 00:43:33.301 }' 00:43:33.301 16:20:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:33.301 16:20:37 -- common/autotest_common.sh@10 -- # set +x 00:43:33.559 16:20:37 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:43:33.559 16:20:37 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:43:33.816 [2024-07-22 16:20:38.035291] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:33.816 [2024-07-22 16:20:38.035336] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:33.816 [2024-07-22 16:20:38.035441] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:33.816 [2024-07-22 16:20:38.035521] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:33.816 [2024-07-22 16:20:38.035535] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:43:33.816 16:20:38 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:43:33.816 16:20:38 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:34.083 16:20:38 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:43:34.083 16:20:38 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:43:34.083 16:20:38 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:43:34.349 [2024-07-22 16:20:38.463387] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:43:34.349 [2024-07-22 16:20:38.463682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:34.349 [2024-07-22 16:20:38.463762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:43:34.349 [2024-07-22 16:20:38.463924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:34.349 [2024-07-22 16:20:38.466792] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:34.349 [2024-07-22 16:20:38.466846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:43:34.349 [2024-07-22 16:20:38.466960] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:43:34.349 [2024-07-22 16:20:38.467032] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:43:34.349 pt1 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:34.349 16:20:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:34.606 16:20:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:34.606 "name": "raid_bdev1", 00:43:34.606 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:34.606 "strip_size_kb": 64, 00:43:34.606 "state": "configuring", 00:43:34.606 "raid_level": "raid5f", 00:43:34.606 "superblock": true, 00:43:34.606 "num_base_bdevs": 3, 00:43:34.606 "num_base_bdevs_discovered": 1, 00:43:34.606 "num_base_bdevs_operational": 3, 00:43:34.606 "base_bdevs_list": [ 00:43:34.606 { 00:43:34.606 "name": "pt1", 00:43:34.606 "uuid": "aaf22919-e5b0-522d-a50b-7c74dc95a406", 00:43:34.606 "is_configured": true, 00:43:34.606 "data_offset": 2048, 00:43:34.606 "data_size": 63488 00:43:34.606 }, 00:43:34.606 { 00:43:34.606 "name": null, 00:43:34.606 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:34.606 "is_configured": false, 00:43:34.606 "data_offset": 2048, 00:43:34.606 "data_size": 63488 00:43:34.606 }, 00:43:34.606 { 00:43:34.606 "name": null, 00:43:34.606 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:34.606 "is_configured": false, 00:43:34.606 "data_offset": 2048, 00:43:34.606 "data_size": 63488 00:43:34.606 } 00:43:34.606 ] 00:43:34.606 }' 00:43:34.606 16:20:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:34.606 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:43:35.173 16:20:39 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:43:35.173 16:20:39 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:43:35.173 16:20:39 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:43:35.430 16:20:39 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:43:35.430 16:20:39 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:43:35.430 16:20:39 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:43:35.688 16:20:39 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:43:35.688 16:20:39 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:43:35.688 16:20:39 -- bdev/bdev_raid.sh@489 -- # i=2 00:43:35.688 16:20:39 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:43:35.947 [2024-07-22 16:20:39.967827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:43:35.947 [2024-07-22 16:20:39.968257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:35.947 [2024-07-22 16:20:39.968315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:43:35.947 [2024-07-22 16:20:39.968333] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:35.947 [2024-07-22 16:20:39.968911] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:35.947 [2024-07-22 16:20:39.968949] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:43:35.947 [2024-07-22 16:20:39.969090] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:43:35.947 [2024-07-22 16:20:39.969109] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:43:35.947 [2024-07-22 16:20:39.969127] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:35.947 [2024-07-22 16:20:39.969154] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:43:35.947 [2024-07-22 16:20:39.969232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:43:35.947 pt3 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:35.947 16:20:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:36.205 16:20:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:36.205 "name": "raid_bdev1", 00:43:36.205 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:36.205 "strip_size_kb": 64, 00:43:36.205 "state": "configuring", 00:43:36.205 "raid_level": "raid5f", 00:43:36.205 "superblock": true, 00:43:36.205 "num_base_bdevs": 3, 00:43:36.205 
"num_base_bdevs_discovered": 1, 00:43:36.205 "num_base_bdevs_operational": 2, 00:43:36.205 "base_bdevs_list": [ 00:43:36.205 { 00:43:36.205 "name": null, 00:43:36.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:36.205 "is_configured": false, 00:43:36.205 "data_offset": 2048, 00:43:36.205 "data_size": 63488 00:43:36.205 }, 00:43:36.205 { 00:43:36.205 "name": null, 00:43:36.205 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:36.205 "is_configured": false, 00:43:36.205 "data_offset": 2048, 00:43:36.205 "data_size": 63488 00:43:36.205 }, 00:43:36.205 { 00:43:36.205 "name": "pt3", 00:43:36.205 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:36.205 "is_configured": true, 00:43:36.205 "data_offset": 2048, 00:43:36.205 "data_size": 63488 00:43:36.205 } 00:43:36.205 ] 00:43:36.205 }' 00:43:36.205 16:20:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:36.205 16:20:40 -- common/autotest_common.sh@10 -- # set +x 00:43:36.463 16:20:40 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:43:36.463 16:20:40 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:43:36.463 16:20:40 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:43:36.723 [2024-07-22 16:20:40.812122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:43:36.723 [2024-07-22 16:20:40.812627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:36.723 [2024-07-22 16:20:40.812674] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:43:36.723 [2024-07-22 16:20:40.812693] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:36.723 [2024-07-22 16:20:40.813294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:36.723 [2024-07-22 16:20:40.813334] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:43:36.723 [2024-07-22 16:20:40.813448] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:43:36.723 [2024-07-22 16:20:40.813486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:43:36.723 [2024-07-22 16:20:40.813613] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:43:36.723 [2024-07-22 16:20:40.813632] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:43:36.723 [2024-07-22 16:20:40.813718] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:43:36.723 pt2 00:43:36.723 [2024-07-22 16:20:40.818411] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:43:36.723 [2024-07-22 16:20:40.818435] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:43:36.723 [2024-07-22 16:20:40.818692] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 
00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:36.723 16:20:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:36.724 16:20:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:36.983 16:20:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:36.983 "name": "raid_bdev1", 00:43:36.983 "uuid": "0be6ac61-3e33-47d2-8618-209be71218f4", 00:43:36.983 "strip_size_kb": 64, 00:43:36.983 "state": "online", 00:43:36.983 "raid_level": "raid5f", 00:43:36.983 "superblock": true, 00:43:36.983 "num_base_bdevs": 3, 00:43:36.983 "num_base_bdevs_discovered": 2, 00:43:36.983 "num_base_bdevs_operational": 2, 00:43:36.983 "base_bdevs_list": [ 00:43:36.983 { 00:43:36.983 "name": null, 00:43:36.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:36.983 "is_configured": false, 00:43:36.983 "data_offset": 2048, 00:43:36.983 "data_size": 63488 00:43:36.983 }, 00:43:36.983 { 00:43:36.983 "name": "pt2", 00:43:36.983 "uuid": "508dd763-4750-56f3-8407-6ce21c48ff8f", 00:43:36.983 "is_configured": true, 00:43:36.983 "data_offset": 2048, 00:43:36.983 "data_size": 63488 00:43:36.983 }, 00:43:36.983 { 00:43:36.983 "name": "pt3", 00:43:36.983 "uuid": "fe6048ff-f055-5a00-9216-d2f5884a177d", 00:43:36.983 "is_configured": true, 00:43:36.983 "data_offset": 2048, 00:43:36.983 "data_size": 63488 00:43:36.983 } 00:43:36.983 ] 00:43:36.983 }' 00:43:36.983 16:20:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:36.983 16:20:41 -- common/autotest_common.sh@10 -- # set +x 00:43:37.242 16:20:41 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:43:37.242 16:20:41 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:43:37.500 [2024-07-22 16:20:41.680964] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:37.500 16:20:41 -- bdev/bdev_raid.sh@506 -- # '[' 0be6ac61-3e33-47d2-8618-209be71218f4 '!=' 0be6ac61-3e33-47d2-8618-209be71218f4 ']' 00:43:37.500 16:20:41 -- bdev/bdev_raid.sh@511 -- # killprocess 84500 00:43:37.500 16:20:41 -- common/autotest_common.sh@926 -- # '[' -z 84500 ']' 00:43:37.500 16:20:41 -- common/autotest_common.sh@930 -- # kill -0 84500 00:43:37.500 16:20:41 -- common/autotest_common.sh@931 -- # uname 00:43:37.500 16:20:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:37.500 16:20:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84500 00:43:37.500 killing process with pid 84500 00:43:37.500 16:20:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:43:37.500 16:20:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:43:37.500 16:20:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84500' 00:43:37.500 16:20:41 -- common/autotest_common.sh@945 -- # kill 84500 00:43:37.500 [2024-07-22 16:20:41.741295] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:37.500 16:20:41 -- common/autotest_common.sh@950 -- # wait 84500 00:43:37.500 [2024-07-22 16:20:41.741397] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:37.501 [2024-07-22 16:20:41.741496] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:37.501 [2024-07-22 16:20:41.741509] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:43:37.759 [2024-07-22 16:20:42.013964] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:43:39.657 ************************************ 00:43:39.657 END TEST raid5f_superblock_test 00:43:39.657 ************************************ 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@513 -- # return 0 00:43:39.657 00:43:39.657 real 0m18.868s 00:43:39.657 user 0m32.180s 00:43:39.657 sys 0m3.073s 00:43:39.657 16:20:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:39.657 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:43:39.657 16:20:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:43:39.657 16:20:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:43:39.657 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:43:39.657 ************************************ 00:43:39.657 START TEST raid5f_rebuild_test 00:43:39.657 ************************************ 00:43:39.657 16:20:43 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false false 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:43:39.657 16:20:43 -- 
bdev/bdev_raid.sh@544 -- # raid_pid=85072 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:43:39.657 16:20:43 -- bdev/bdev_raid.sh@545 -- # waitforlisten 85072 /var/tmp/spdk-raid.sock 00:43:39.657 16:20:43 -- common/autotest_common.sh@819 -- # '[' -z 85072 ']' 00:43:39.657 16:20:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:43:39.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:43:39.657 16:20:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:43:39.657 16:20:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:43:39.657 16:20:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:43:39.657 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:43:39.657 [2024-07-22 16:20:43.571494] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:43:39.657 I/O size of 3145728 is greater than zero copy threshold (65536). 00:43:39.657 Zero copy mechanism will not be used. 00:43:39.657 [2024-07-22 16:20:43.572050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85072 ] 00:43:39.657 [2024-07-22 16:20:43.727294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:39.916 [2024-07-22 16:20:44.000752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:40.175 [2024-07-22 16:20:44.232333] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:43:40.434 16:20:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:43:40.434 16:20:44 -- common/autotest_common.sh@852 -- # return 0 00:43:40.434 16:20:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:43:40.434 16:20:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:43:40.434 16:20:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:43:40.693 BaseBdev1 00:43:40.693 16:20:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:43:40.693 16:20:44 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:43:40.693 16:20:44 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:43:40.950 BaseBdev2 00:43:40.950 16:20:45 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:43:40.950 16:20:45 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:43:40.950 16:20:45 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:43:41.208 BaseBdev3 00:43:41.208 16:20:45 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:43:41.467 spare_malloc 00:43:41.467 16:20:45 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:43:41.726 spare_delay 00:43:41.726 16:20:45 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:43:41.985 [2024-07-22 16:20:46.046195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:43:41.985 [2024-07-22 16:20:46.046578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:43:41.985 [2024-07-22 16:20:46.046627] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:43:41.985 [2024-07-22 16:20:46.046649] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:43:41.985 [2024-07-22 16:20:46.049838] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:43:41.985 spare 00:43:41.985 [2024-07-22 16:20:46.050064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:43:41.985 16:20:46 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:43:42.243 [2024-07-22 16:20:46.322494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:43:42.243 [2024-07-22 16:20:46.325956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:43:42.243 [2024-07-22 16:20:46.326068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:43:42.243 [2024-07-22 16:20:46.326206] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:43:42.243 [2024-07-22 16:20:46.326223] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:43:42.243 [2024-07-22 16:20:46.326407] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:43:42.243 [2024-07-22 16:20:46.332163] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:43:42.243 [2024-07-22 16:20:46.332219] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:43:42.243 [2024-07-22 16:20:46.332592] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:42.243 16:20:46 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:42.244 16:20:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:42.502 16:20:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:42.502 "name": "raid_bdev1", 00:43:42.502 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:42.502 "strip_size_kb": 64, 00:43:42.502 "state": "online", 00:43:42.502 "raid_level": "raid5f", 00:43:42.502 "superblock": false, 00:43:42.502 "num_base_bdevs": 3, 
00:43:42.502 "num_base_bdevs_discovered": 3, 00:43:42.502 "num_base_bdevs_operational": 3, 00:43:42.502 "base_bdevs_list": [ 00:43:42.502 { 00:43:42.502 "name": "BaseBdev1", 00:43:42.502 "uuid": "735a9229-23de-4f87-82b9-ef0070a7bfdc", 00:43:42.502 "is_configured": true, 00:43:42.502 "data_offset": 0, 00:43:42.502 "data_size": 65536 00:43:42.502 }, 00:43:42.502 { 00:43:42.502 "name": "BaseBdev2", 00:43:42.502 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:42.502 "is_configured": true, 00:43:42.502 "data_offset": 0, 00:43:42.502 "data_size": 65536 00:43:42.502 }, 00:43:42.502 { 00:43:42.502 "name": "BaseBdev3", 00:43:42.502 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:42.502 "is_configured": true, 00:43:42.502 "data_offset": 0, 00:43:42.502 "data_size": 65536 00:43:42.502 } 00:43:42.502 ] 00:43:42.502 }' 00:43:42.502 16:20:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:42.502 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:43:43.067 16:20:47 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:43:43.068 16:20:47 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:43:43.068 [2024-07-22 16:20:47.307342] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:43:43.068 16:20:47 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:43:43.068 16:20:47 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:43:43.068 16:20:47 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:43.326 16:20:47 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:43:43.326 16:20:47 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:43:43.326 16:20:47 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:43:43.326 16:20:47 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@12 -- # local i 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:43.326 16:20:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:43:43.585 [2024-07-22 16:20:47.795522] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:43:43.585 /dev/nbd0 00:43:43.585 16:20:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:43.585 16:20:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:43.585 16:20:47 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:43:43.585 16:20:47 -- common/autotest_common.sh@857 -- # local i 00:43:43.585 16:20:47 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:43:43.585 16:20:47 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:43:43.585 16:20:47 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:43:43.585 16:20:47 -- common/autotest_common.sh@861 -- # break 00:43:43.585 16:20:47 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:43:43.585 16:20:47 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:43:43.585 16:20:47 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:43.585 1+0 records in 00:43:43.585 1+0 records out 00:43:43.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294732 s, 13.9 MB/s 00:43:43.585 16:20:47 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:43.585 16:20:47 -- common/autotest_common.sh@874 -- # size=4096 00:43:43.585 16:20:47 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:43.585 16:20:47 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:43:43.585 16:20:47 -- common/autotest_common.sh@877 -- # return 0 00:43:43.585 16:20:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:43.585 16:20:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:43.585 16:20:47 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:43:43.585 16:20:47 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:43:43.585 16:20:47 -- bdev/bdev_raid.sh@582 -- # echo 128 00:43:43.585 16:20:47 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:43:44.150 512+0 records in 00:43:44.150 512+0 records out 00:43:44.150 67108864 bytes (67 MB, 64 MiB) copied, 0.505388 s, 133 MB/s 00:43:44.150 16:20:48 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:43:44.150 16:20:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:44.150 16:20:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:44.150 16:20:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:44.150 16:20:48 -- bdev/nbd_common.sh@51 -- # local i 00:43:44.150 16:20:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:44.150 16:20:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:43:44.409 [2024-07-22 16:20:48.662756] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@41 -- # break 00:43:44.409 16:20:48 -- bdev/nbd_common.sh@45 -- # return 0 00:43:44.409 16:20:48 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:43:44.667 [2024-07-22 16:20:48.893543] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:44.667 16:20:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:44.668 16:20:48 
-- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:44.668 16:20:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:44.668 16:20:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:44.668 16:20:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:45.237 16:20:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:45.237 "name": "raid_bdev1", 00:43:45.237 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:45.237 "strip_size_kb": 64, 00:43:45.237 "state": "online", 00:43:45.237 "raid_level": "raid5f", 00:43:45.237 "superblock": false, 00:43:45.237 "num_base_bdevs": 3, 00:43:45.237 "num_base_bdevs_discovered": 2, 00:43:45.237 "num_base_bdevs_operational": 2, 00:43:45.237 "base_bdevs_list": [ 00:43:45.237 { 00:43:45.237 "name": null, 00:43:45.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:45.237 "is_configured": false, 00:43:45.237 "data_offset": 0, 00:43:45.237 "data_size": 65536 00:43:45.237 }, 00:43:45.237 { 00:43:45.237 "name": "BaseBdev2", 00:43:45.237 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:45.237 "is_configured": true, 00:43:45.237 "data_offset": 0, 00:43:45.237 "data_size": 65536 00:43:45.237 }, 00:43:45.237 { 00:43:45.237 "name": "BaseBdev3", 00:43:45.237 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:45.237 "is_configured": true, 00:43:45.237 "data_offset": 0, 00:43:45.237 "data_size": 65536 00:43:45.237 } 00:43:45.237 ] 00:43:45.237 }' 00:43:45.237 16:20:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:45.237 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:43:45.496 16:20:49 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:43:45.755 [2024-07-22 16:20:49.833785] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:43:45.755 [2024-07-22 16:20:49.833863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:45.755 [2024-07-22 16:20:49.848864] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002af30 00:43:45.755 [2024-07-22 16:20:49.856946] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:45.755 16:20:49 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:43:46.691 16:20:50 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:46.691 16:20:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:46.691 16:20:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:46.691 16:20:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:46.691 16:20:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:46.691 16:20:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:46.691 16:20:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:46.951 16:20:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:46.951 "name": "raid_bdev1", 00:43:46.951 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:46.951 "strip_size_kb": 64, 00:43:46.951 "state": "online", 00:43:46.951 "raid_level": "raid5f", 00:43:46.951 "superblock": false, 00:43:46.951 "num_base_bdevs": 3, 00:43:46.951 "num_base_bdevs_discovered": 3, 00:43:46.951 "num_base_bdevs_operational": 3, 00:43:46.951 "process": { 00:43:46.951 "type": "rebuild", 00:43:46.951 "target": 
"spare", 00:43:46.951 "progress": { 00:43:46.951 "blocks": 24576, 00:43:46.951 "percent": 18 00:43:46.951 } 00:43:46.951 }, 00:43:46.951 "base_bdevs_list": [ 00:43:46.951 { 00:43:46.951 "name": "spare", 00:43:46.951 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:46.951 "is_configured": true, 00:43:46.951 "data_offset": 0, 00:43:46.951 "data_size": 65536 00:43:46.951 }, 00:43:46.951 { 00:43:46.951 "name": "BaseBdev2", 00:43:46.951 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:46.951 "is_configured": true, 00:43:46.951 "data_offset": 0, 00:43:46.951 "data_size": 65536 00:43:46.951 }, 00:43:46.951 { 00:43:46.951 "name": "BaseBdev3", 00:43:46.951 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:46.951 "is_configured": true, 00:43:46.951 "data_offset": 0, 00:43:46.951 "data_size": 65536 00:43:46.951 } 00:43:46.951 ] 00:43:46.951 }' 00:43:46.951 16:20:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:46.951 16:20:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:46.951 16:20:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:46.951 16:20:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:46.951 16:20:51 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:43:47.209 [2024-07-22 16:20:51.416520] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:43:47.209 [2024-07-22 16:20:51.477968] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:43:47.209 [2024-07-22 16:20:51.478136] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:47.468 16:20:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:47.727 16:20:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:47.727 "name": "raid_bdev1", 00:43:47.727 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:47.727 "strip_size_kb": 64, 00:43:47.727 "state": "online", 00:43:47.727 "raid_level": "raid5f", 00:43:47.727 "superblock": false, 00:43:47.727 "num_base_bdevs": 3, 00:43:47.727 "num_base_bdevs_discovered": 2, 00:43:47.727 "num_base_bdevs_operational": 2, 00:43:47.727 "base_bdevs_list": [ 00:43:47.727 { 00:43:47.727 "name": null, 00:43:47.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:47.727 "is_configured": false, 00:43:47.727 "data_offset": 0, 00:43:47.727 "data_size": 65536 00:43:47.727 }, 00:43:47.727 { 00:43:47.727 "name": "BaseBdev2", 00:43:47.727 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 
00:43:47.727 "is_configured": true, 00:43:47.727 "data_offset": 0, 00:43:47.727 "data_size": 65536 00:43:47.727 }, 00:43:47.727 { 00:43:47.727 "name": "BaseBdev3", 00:43:47.727 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:47.727 "is_configured": true, 00:43:47.727 "data_offset": 0, 00:43:47.727 "data_size": 65536 00:43:47.727 } 00:43:47.727 ] 00:43:47.727 }' 00:43:47.727 16:20:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:47.727 16:20:51 -- common/autotest_common.sh@10 -- # set +x 00:43:47.986 16:20:52 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:47.986 16:20:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:47.986 16:20:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:43:47.986 16:20:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:43:47.986 16:20:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:47.986 16:20:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:47.986 16:20:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:48.246 16:20:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:48.246 "name": "raid_bdev1", 00:43:48.246 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:48.246 "strip_size_kb": 64, 00:43:48.246 "state": "online", 00:43:48.246 "raid_level": "raid5f", 00:43:48.246 "superblock": false, 00:43:48.246 "num_base_bdevs": 3, 00:43:48.246 "num_base_bdevs_discovered": 2, 00:43:48.246 "num_base_bdevs_operational": 2, 00:43:48.246 "base_bdevs_list": [ 00:43:48.246 { 00:43:48.246 "name": null, 00:43:48.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:43:48.246 "is_configured": false, 00:43:48.246 "data_offset": 0, 00:43:48.246 "data_size": 65536 00:43:48.246 }, 00:43:48.246 { 00:43:48.246 "name": "BaseBdev2", 00:43:48.246 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:48.246 "is_configured": true, 00:43:48.246 "data_offset": 0, 00:43:48.246 "data_size": 65536 00:43:48.246 }, 00:43:48.246 { 00:43:48.246 "name": "BaseBdev3", 00:43:48.246 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:48.246 "is_configured": true, 00:43:48.246 "data_offset": 0, 00:43:48.246 "data_size": 65536 00:43:48.246 } 00:43:48.246 ] 00:43:48.246 }' 00:43:48.246 16:20:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:48.246 16:20:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:43:48.246 16:20:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:48.246 16:20:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:43:48.246 16:20:52 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:43:48.505 [2024-07-22 16:20:52.768301] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:43:48.505 [2024-07-22 16:20:52.768424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:43:48.763 [2024-07-22 16:20:52.783193] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:43:48.763 [2024-07-22 16:20:52.791214] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:43:48.763 16:20:52 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:43:49.698 16:20:53 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:49.698 16:20:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:43:49.698 16:20:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:49.698 16:20:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:49.698 16:20:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:49.698 16:20:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:49.698 16:20:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:49.957 "name": "raid_bdev1", 00:43:49.957 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:49.957 "strip_size_kb": 64, 00:43:49.957 "state": "online", 00:43:49.957 "raid_level": "raid5f", 00:43:49.957 "superblock": false, 00:43:49.957 "num_base_bdevs": 3, 00:43:49.957 "num_base_bdevs_discovered": 3, 00:43:49.957 "num_base_bdevs_operational": 3, 00:43:49.957 "process": { 00:43:49.957 "type": "rebuild", 00:43:49.957 "target": "spare", 00:43:49.957 "progress": { 00:43:49.957 "blocks": 24576, 00:43:49.957 "percent": 18 00:43:49.957 } 00:43:49.957 }, 00:43:49.957 "base_bdevs_list": [ 00:43:49.957 { 00:43:49.957 "name": "spare", 00:43:49.957 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:49.957 "is_configured": true, 00:43:49.957 "data_offset": 0, 00:43:49.957 "data_size": 65536 00:43:49.957 }, 00:43:49.957 { 00:43:49.957 "name": "BaseBdev2", 00:43:49.957 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:49.957 "is_configured": true, 00:43:49.957 "data_offset": 0, 00:43:49.957 "data_size": 65536 00:43:49.957 }, 00:43:49.957 { 00:43:49.957 "name": "BaseBdev3", 00:43:49.957 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:49.957 "is_configured": true, 00:43:49.957 "data_offset": 0, 00:43:49.957 "data_size": 65536 00:43:49.957 } 00:43:49.957 ] 00:43:49.957 }' 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@657 -- # local timeout=619 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:49.957 16:20:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:50.215 16:20:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:50.215 "name": "raid_bdev1", 00:43:50.215 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:50.215 "strip_size_kb": 64, 00:43:50.215 "state": "online", 00:43:50.215 "raid_level": "raid5f", 00:43:50.215 "superblock": false, 00:43:50.215 "num_base_bdevs": 3, 00:43:50.215 
"num_base_bdevs_discovered": 3, 00:43:50.215 "num_base_bdevs_operational": 3, 00:43:50.215 "process": { 00:43:50.215 "type": "rebuild", 00:43:50.215 "target": "spare", 00:43:50.215 "progress": { 00:43:50.215 "blocks": 32768, 00:43:50.215 "percent": 25 00:43:50.215 } 00:43:50.215 }, 00:43:50.215 "base_bdevs_list": [ 00:43:50.215 { 00:43:50.215 "name": "spare", 00:43:50.215 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:50.215 "is_configured": true, 00:43:50.215 "data_offset": 0, 00:43:50.215 "data_size": 65536 00:43:50.215 }, 00:43:50.215 { 00:43:50.215 "name": "BaseBdev2", 00:43:50.215 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:50.215 "is_configured": true, 00:43:50.215 "data_offset": 0, 00:43:50.215 "data_size": 65536 00:43:50.215 }, 00:43:50.215 { 00:43:50.215 "name": "BaseBdev3", 00:43:50.215 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:50.215 "is_configured": true, 00:43:50.215 "data_offset": 0, 00:43:50.215 "data_size": 65536 00:43:50.215 } 00:43:50.215 ] 00:43:50.215 }' 00:43:50.215 16:20:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:50.215 16:20:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:50.215 16:20:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:50.215 16:20:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:50.215 16:20:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:51.591 "name": "raid_bdev1", 00:43:51.591 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:51.591 "strip_size_kb": 64, 00:43:51.591 "state": "online", 00:43:51.591 "raid_level": "raid5f", 00:43:51.591 "superblock": false, 00:43:51.591 "num_base_bdevs": 3, 00:43:51.591 "num_base_bdevs_discovered": 3, 00:43:51.591 "num_base_bdevs_operational": 3, 00:43:51.591 "process": { 00:43:51.591 "type": "rebuild", 00:43:51.591 "target": "spare", 00:43:51.591 "progress": { 00:43:51.591 "blocks": 57344, 00:43:51.591 "percent": 43 00:43:51.591 } 00:43:51.591 }, 00:43:51.591 "base_bdevs_list": [ 00:43:51.591 { 00:43:51.591 "name": "spare", 00:43:51.591 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:51.591 "is_configured": true, 00:43:51.591 "data_offset": 0, 00:43:51.591 "data_size": 65536 00:43:51.591 }, 00:43:51.591 { 00:43:51.591 "name": "BaseBdev2", 00:43:51.591 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:51.591 "is_configured": true, 00:43:51.591 "data_offset": 0, 00:43:51.591 "data_size": 65536 00:43:51.591 }, 00:43:51.591 { 00:43:51.591 "name": "BaseBdev3", 00:43:51.591 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:51.591 "is_configured": true, 00:43:51.591 "data_offset": 0, 00:43:51.591 "data_size": 65536 00:43:51.591 } 00:43:51.591 ] 00:43:51.591 }' 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.type // "none"' 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:51.591 16:20:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:52.525 16:20:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:52.783 16:20:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:52.783 "name": "raid_bdev1", 00:43:52.783 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:52.783 "strip_size_kb": 64, 00:43:52.783 "state": "online", 00:43:52.783 "raid_level": "raid5f", 00:43:52.783 "superblock": false, 00:43:52.783 "num_base_bdevs": 3, 00:43:52.783 "num_base_bdevs_discovered": 3, 00:43:52.783 "num_base_bdevs_operational": 3, 00:43:52.783 "process": { 00:43:52.783 "type": "rebuild", 00:43:52.783 "target": "spare", 00:43:52.783 "progress": { 00:43:52.783 "blocks": 83968, 00:43:52.783 "percent": 64 00:43:52.783 } 00:43:52.783 }, 00:43:52.783 "base_bdevs_list": [ 00:43:52.783 { 00:43:52.783 "name": "spare", 00:43:52.783 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:52.783 "is_configured": true, 00:43:52.783 "data_offset": 0, 00:43:52.783 "data_size": 65536 00:43:52.783 }, 00:43:52.783 { 00:43:52.783 "name": "BaseBdev2", 00:43:52.783 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:52.783 "is_configured": true, 00:43:52.783 "data_offset": 0, 00:43:52.783 "data_size": 65536 00:43:52.783 }, 00:43:52.783 { 00:43:52.783 "name": "BaseBdev3", 00:43:52.783 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:52.783 "is_configured": true, 00:43:52.783 "data_offset": 0, 00:43:52.783 "data_size": 65536 00:43:52.783 } 00:43:52.783 ] 00:43:52.783 }' 00:43:52.783 16:20:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:52.783 16:20:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:52.783 16:20:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:53.041 16:20:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:53.041 16:20:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:53.975 16:20:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:43:54.234 16:20:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:54.234 "name": "raid_bdev1", 00:43:54.234 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:54.234 "strip_size_kb": 64, 00:43:54.234 "state": "online", 00:43:54.234 "raid_level": "raid5f", 00:43:54.234 "superblock": false, 00:43:54.234 "num_base_bdevs": 3, 00:43:54.234 "num_base_bdevs_discovered": 3, 00:43:54.234 "num_base_bdevs_operational": 3, 00:43:54.234 "process": { 00:43:54.234 "type": "rebuild", 00:43:54.234 "target": "spare", 00:43:54.234 "progress": { 00:43:54.234 "blocks": 110592, 00:43:54.234 "percent": 84 00:43:54.234 } 00:43:54.234 }, 00:43:54.234 "base_bdevs_list": [ 00:43:54.234 { 00:43:54.234 "name": "spare", 00:43:54.234 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:54.234 "is_configured": true, 00:43:54.234 "data_offset": 0, 00:43:54.234 "data_size": 65536 00:43:54.234 }, 00:43:54.234 { 00:43:54.234 "name": "BaseBdev2", 00:43:54.234 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:54.234 "is_configured": true, 00:43:54.234 "data_offset": 0, 00:43:54.234 "data_size": 65536 00:43:54.234 }, 00:43:54.234 { 00:43:54.234 "name": "BaseBdev3", 00:43:54.234 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:54.234 "is_configured": true, 00:43:54.234 "data_offset": 0, 00:43:54.234 "data_size": 65536 00:43:54.234 } 00:43:54.234 ] 00:43:54.234 }' 00:43:54.234 16:20:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:54.234 16:20:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:43:54.234 16:20:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:54.234 16:20:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:43:54.234 16:20:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:43:55.183 [2024-07-22 16:20:59.274760] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:43:55.183 [2024-07-22 16:20:59.274906] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:43:55.183 [2024-07-22 16:20:59.274979] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:55.183 16:20:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:55.442 "name": "raid_bdev1", 00:43:55.442 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:55.442 "strip_size_kb": 64, 00:43:55.442 "state": "online", 00:43:55.442 "raid_level": "raid5f", 00:43:55.442 "superblock": false, 00:43:55.442 "num_base_bdevs": 3, 00:43:55.442 "num_base_bdevs_discovered": 3, 00:43:55.442 "num_base_bdevs_operational": 3, 00:43:55.442 "base_bdevs_list": [ 00:43:55.442 { 00:43:55.442 "name": "spare", 00:43:55.442 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:55.442 "is_configured": true, 00:43:55.442 "data_offset": 0, 00:43:55.442 "data_size": 65536 
00:43:55.442 }, 00:43:55.442 { 00:43:55.442 "name": "BaseBdev2", 00:43:55.442 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:55.442 "is_configured": true, 00:43:55.442 "data_offset": 0, 00:43:55.442 "data_size": 65536 00:43:55.442 }, 00:43:55.442 { 00:43:55.442 "name": "BaseBdev3", 00:43:55.442 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:55.442 "is_configured": true, 00:43:55.442 "data_offset": 0, 00:43:55.442 "data_size": 65536 00:43:55.442 } 00:43:55.442 ] 00:43:55.442 }' 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@660 -- # break 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:55.442 16:20:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:56.008 16:20:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:43:56.008 "name": "raid_bdev1", 00:43:56.008 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:56.008 "strip_size_kb": 64, 00:43:56.008 "state": "online", 00:43:56.008 "raid_level": "raid5f", 00:43:56.009 "superblock": false, 00:43:56.009 "num_base_bdevs": 3, 00:43:56.009 "num_base_bdevs_discovered": 3, 00:43:56.009 "num_base_bdevs_operational": 3, 00:43:56.009 "base_bdevs_list": [ 00:43:56.009 { 00:43:56.009 "name": "spare", 00:43:56.009 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:56.009 "is_configured": true, 00:43:56.009 "data_offset": 0, 00:43:56.009 "data_size": 65536 00:43:56.009 }, 00:43:56.009 { 00:43:56.009 "name": "BaseBdev2", 00:43:56.009 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:56.009 "is_configured": true, 00:43:56.009 "data_offset": 0, 00:43:56.009 "data_size": 65536 00:43:56.009 }, 00:43:56.009 { 00:43:56.009 "name": "BaseBdev3", 00:43:56.009 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:56.009 "is_configured": true, 00:43:56.009 "data_offset": 0, 00:43:56.009 "data_size": 65536 00:43:56.009 } 00:43:56.009 ] 00:43:56.009 }' 00:43:56.009 16:20:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:43:56.009 16:20:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:43:56.009 16:20:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:43:56.009 16:20:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:43:56.009 16:21:00 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:56.009 16:21:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:43:56.267 16:21:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:43:56.267 "name": "raid_bdev1", 00:43:56.267 "uuid": "5a31eec0-2b25-4bab-b831-d968c748ea98", 00:43:56.267 "strip_size_kb": 64, 00:43:56.267 "state": "online", 00:43:56.267 "raid_level": "raid5f", 00:43:56.267 "superblock": false, 00:43:56.267 "num_base_bdevs": 3, 00:43:56.267 "num_base_bdevs_discovered": 3, 00:43:56.267 "num_base_bdevs_operational": 3, 00:43:56.267 "base_bdevs_list": [ 00:43:56.267 { 00:43:56.267 "name": "spare", 00:43:56.267 "uuid": "e3367503-6cc3-5c6a-85ae-ae4520844775", 00:43:56.267 "is_configured": true, 00:43:56.267 "data_offset": 0, 00:43:56.267 "data_size": 65536 00:43:56.267 }, 00:43:56.267 { 00:43:56.267 "name": "BaseBdev2", 00:43:56.267 "uuid": "5b1f5038-2c2f-4fa9-9ef1-a5548c582979", 00:43:56.267 "is_configured": true, 00:43:56.267 "data_offset": 0, 00:43:56.267 "data_size": 65536 00:43:56.267 }, 00:43:56.267 { 00:43:56.267 "name": "BaseBdev3", 00:43:56.267 "uuid": "fa1f626c-a814-464d-8c8b-90f3d04d35d5", 00:43:56.267 "is_configured": true, 00:43:56.267 "data_offset": 0, 00:43:56.267 "data_size": 65536 00:43:56.267 } 00:43:56.267 ] 00:43:56.267 }' 00:43:56.267 16:21:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:43:56.267 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:43:56.525 16:21:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:43:56.783 [2024-07-22 16:21:00.975383] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:43:56.783 [2024-07-22 16:21:00.975439] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:43:56.783 [2024-07-22 16:21:00.975590] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:43:56.783 [2024-07-22 16:21:00.975725] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:43:56.783 [2024-07-22 16:21:00.975756] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:43:56.783 16:21:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:43:56.783 16:21:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:43:57.350 16:21:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:43:57.350 16:21:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:43:57.350 16:21:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@12 
-- # local i 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:57.350 16:21:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:43:57.608 /dev/nbd0 00:43:57.608 16:21:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:57.608 16:21:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:57.608 16:21:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:43:57.608 16:21:01 -- common/autotest_common.sh@857 -- # local i 00:43:57.608 16:21:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:43:57.608 16:21:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:43:57.608 16:21:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:43:57.608 16:21:01 -- common/autotest_common.sh@861 -- # break 00:43:57.608 16:21:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:43:57.608 16:21:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:43:57.608 16:21:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:57.608 1+0 records in 00:43:57.608 1+0 records out 00:43:57.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255544 s, 16.0 MB/s 00:43:57.608 16:21:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:57.608 16:21:01 -- common/autotest_common.sh@874 -- # size=4096 00:43:57.608 16:21:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:57.608 16:21:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:43:57.608 16:21:01 -- common/autotest_common.sh@877 -- # return 0 00:43:57.608 16:21:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:57.608 16:21:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:57.608 16:21:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:43:57.867 /dev/nbd1 00:43:57.867 16:21:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:57.867 16:21:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:57.867 16:21:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:43:57.867 16:21:01 -- common/autotest_common.sh@857 -- # local i 00:43:57.867 16:21:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:43:57.867 16:21:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:43:57.867 16:21:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:43:57.867 16:21:01 -- common/autotest_common.sh@861 -- # break 00:43:57.867 16:21:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:43:57.867 16:21:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:43:57.867 16:21:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:57.867 1+0 records in 00:43:57.867 1+0 records out 00:43:57.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453248 s, 9.0 MB/s 00:43:57.867 16:21:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:57.867 16:21:01 -- common/autotest_common.sh@874 -- # size=4096 00:43:57.867 16:21:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:57.867 16:21:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:43:57.867 16:21:01 -- 
common/autotest_common.sh@877 -- # return 0 00:43:57.867 16:21:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:57.867 16:21:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:43:57.867 16:21:01 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:43:58.125 16:21:02 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:43:58.125 16:21:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:43:58.125 16:21:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:43:58.125 16:21:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:58.125 16:21:02 -- bdev/nbd_common.sh@51 -- # local i 00:43:58.125 16:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:58.125 16:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@41 -- # break 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@45 -- # return 0 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:58.383 16:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@41 -- # break 00:43:58.641 16:21:02 -- bdev/nbd_common.sh@45 -- # return 0 00:43:58.641 16:21:02 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:43:58.641 16:21:02 -- bdev/bdev_raid.sh@709 -- # killprocess 85072 00:43:58.641 16:21:02 -- common/autotest_common.sh@926 -- # '[' -z 85072 ']' 00:43:58.641 16:21:02 -- common/autotest_common.sh@930 -- # kill -0 85072 00:43:58.641 16:21:02 -- common/autotest_common.sh@931 -- # uname 00:43:58.641 16:21:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:43:58.641 16:21:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85072 00:43:58.641 killing process with pid 85072 00:43:58.641 Received shutdown signal, test time was about 60.000000 seconds 00:43:58.641 00:43:58.641 Latency(us) 00:43:58.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:58.642 =================================================================================================================== 00:43:58.642 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:58.642 16:21:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:43:58.642 16:21:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:43:58.642 16:21:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85072' 00:43:58.642 16:21:02 -- common/autotest_common.sh@945 -- # kill 85072 00:43:58.642 
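The verification and teardown phase seen above reduces to the sketch below, again reconstructed only from commands visible in this log; the $rpc shorthand, the bracketed emptiness check, and the raid_pid variable (85072 in this run) are written out here for readability and are not claimed to match the test helpers' exact wording.

# Sketch of the post-rebuild verification and teardown in this trace.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# The raid bdev is deleted first; an empty list from bdev_raid_get_bdevs confirms it.
$rpc bdev_raid_delete raid_bdev1
[ "$($rpc bdev_raid_get_bdevs all | jq length)" -eq 0 ]

# Data check: the original BaseBdev1 and the rebuilt spare are exported over NBD and
# compared byte-for-byte; after a clean raid5f rebuild the spare should mirror BaseBdev1.
$rpc nbd_start_disk BaseBdev1 /dev/nbd0
$rpc nbd_start_disk spare /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1

# Finally the bdevperf app behind the RPC socket is shut down.
kill "$raid_pid" && wait "$raid_pid"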
16:21:02 -- common/autotest_common.sh@950 -- # wait 85072 00:43:58.642 [2024-07-22 16:21:02.877451] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:43:59.205 [2024-07-22 16:21:03.263879] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:00.581 ************************************ 00:44:00.581 END TEST raid5f_rebuild_test 00:44:00.581 ************************************ 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:44:00.581 00:44:00.581 real 0m21.214s 00:44:00.581 user 0m29.972s 00:44:00.581 sys 0m3.283s 00:44:00.581 16:21:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:00.581 16:21:04 -- common/autotest_common.sh@10 -- # set +x 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:44:00.581 16:21:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:44:00.581 16:21:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:44:00.581 16:21:04 -- common/autotest_common.sh@10 -- # set +x 00:44:00.581 ************************************ 00:44:00.581 START TEST raid5f_rebuild_test_sb 00:44:00.581 ************************************ 00:44:00.581 16:21:04 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=85586 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:44:00.581 16:21:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 85586 /var/tmp/spdk-raid.sock 00:44:00.581 16:21:04 -- common/autotest_common.sh@819 -- # '[' -z 85586 ']' 00:44:00.581 16:21:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:44:00.581 16:21:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:44:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:44:00.581 16:21:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:44:00.581 16:21:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:44:00.581 16:21:04 -- common/autotest_common.sh@10 -- # set +x 00:44:00.581 I/O size of 3145728 is greater than zero copy threshold (65536). 00:44:00.581 Zero copy mechanism will not be used. 00:44:00.581 [2024-07-22 16:21:04.849669] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:44:00.581 [2024-07-22 16:21:04.849868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85586 ] 00:44:00.840 [2024-07-22 16:21:05.021366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:01.099 [2024-07-22 16:21:05.319897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:01.357 [2024-07-22 16:21:05.564001] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:01.923 16:21:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:44:01.923 16:21:05 -- common/autotest_common.sh@852 -- # return 0 00:44:01.923 16:21:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:44:01.923 16:21:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:44:01.923 16:21:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:44:02.181 BaseBdev1_malloc 00:44:02.181 16:21:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:44:02.440 [2024-07-22 16:21:06.505497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:44:02.440 [2024-07-22 16:21:06.505668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:02.440 [2024-07-22 16:21:06.505714] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:44:02.440 [2024-07-22 16:21:06.505735] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:02.440 [2024-07-22 16:21:06.509488] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:02.440 BaseBdev1 00:44:02.440 [2024-07-22 16:21:06.509744] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:44:02.440 16:21:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:44:02.440 16:21:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:44:02.440 16:21:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:44:02.701 BaseBdev2_malloc 00:44:02.701 16:21:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:44:02.990 [2024-07-22 16:21:07.178972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:44:02.990 [2024-07-22 16:21:07.179129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:02.990 [2024-07-22 16:21:07.179183] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:44:02.990 [2024-07-22 16:21:07.179209] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:02.990 [2024-07-22 16:21:07.182192] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:02.990 [2024-07-22 16:21:07.182242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:44:02.990 BaseBdev2 00:44:02.990 16:21:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:44:02.990 16:21:07 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:44:02.990 16:21:07 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:44:03.248 BaseBdev3_malloc 00:44:03.507 16:21:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:44:03.507 [2024-07-22 16:21:07.761820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:44:03.507 [2024-07-22 16:21:07.761975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:03.507 [2024-07-22 16:21:07.762014] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:44:03.507 [2024-07-22 16:21:07.762071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:03.507 [2024-07-22 16:21:07.765228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:03.507 [2024-07-22 16:21:07.765277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:44:03.507 BaseBdev3 00:44:03.765 16:21:07 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:44:03.765 spare_malloc 00:44:04.023 16:21:08 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:44:04.023 spare_delay 00:44:04.023 16:21:08 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:44:04.590 [2024-07-22 16:21:08.568419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:44:04.590 [2024-07-22 16:21:08.568596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:04.590 [2024-07-22 16:21:08.568636] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:44:04.590 [2024-07-22 16:21:08.568656] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:04.590 [2024-07-22 16:21:08.571548] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:04.590 [2024-07-22 16:21:08.571596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:44:04.590 spare 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:44:04.590 [2024-07-22 16:21:08.812640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:04.590 [2024-07-22 16:21:08.815226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:04.590 [2024-07-22 16:21:08.815459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:04.590 [2024-07-22 16:21:08.815893] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:44:04.590 [2024-07-22 16:21:08.816094] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:44:04.590 [2024-07-22 16:21:08.816458] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:44:04.590 [2024-07-22 16:21:08.822789] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:44:04.590 [2024-07-22 16:21:08.822945] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:44:04.590 [2024-07-22 16:21:08.823381] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:04.590 16:21:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:04.848 16:21:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:04.848 "name": "raid_bdev1", 00:44:04.848 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:04.848 "strip_size_kb": 64, 00:44:04.848 "state": "online", 00:44:04.848 "raid_level": "raid5f", 00:44:04.848 "superblock": true, 00:44:04.848 "num_base_bdevs": 3, 00:44:04.848 "num_base_bdevs_discovered": 3, 00:44:04.848 "num_base_bdevs_operational": 3, 00:44:04.848 "base_bdevs_list": [ 00:44:04.848 { 00:44:04.848 "name": "BaseBdev1", 00:44:04.848 "uuid": "f4cef22d-7e9a-5a8c-8e1a-2be6de9787b5", 00:44:04.848 "is_configured": true, 00:44:04.848 "data_offset": 2048, 00:44:04.848 "data_size": 63488 00:44:04.848 }, 00:44:04.848 { 00:44:04.848 "name": "BaseBdev2", 00:44:04.848 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:04.848 "is_configured": true, 00:44:04.848 "data_offset": 2048, 00:44:04.848 "data_size": 63488 00:44:04.848 }, 00:44:04.848 { 00:44:04.848 "name": "BaseBdev3", 00:44:04.848 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:04.848 "is_configured": true, 00:44:04.848 "data_offset": 2048, 00:44:04.848 "data_size": 63488 00:44:04.848 } 00:44:04.848 ] 00:44:04.848 }' 00:44:04.848 16:21:09 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:04.848 16:21:09 -- common/autotest_common.sh@10 -- # set +x 00:44:05.416 16:21:09 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:44:05.416 16:21:09 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:44:05.416 [2024-07-22 16:21:09.678388] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:05.686 16:21:09 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:44:05.686 16:21:09 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:05.686 16:21:09 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:44:05.967 16:21:09 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:44:05.967 16:21:09 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:44:05.967 16:21:09 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:44:05.967 16:21:09 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@12 -- # local i 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:44:05.967 16:21:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:44:06.225 [2024-07-22 16:21:10.242379] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:44:06.225 /dev/nbd0 00:44:06.225 16:21:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:06.225 16:21:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:06.225 16:21:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:44:06.225 16:21:10 -- common/autotest_common.sh@857 -- # local i 00:44:06.225 16:21:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:44:06.225 16:21:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:44:06.225 16:21:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:44:06.225 16:21:10 -- common/autotest_common.sh@861 -- # break 00:44:06.225 16:21:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:44:06.225 16:21:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:44:06.225 16:21:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:06.225 1+0 records in 00:44:06.225 1+0 records out 00:44:06.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339903 s, 12.1 MB/s 00:44:06.225 16:21:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:06.225 16:21:10 -- common/autotest_common.sh@874 -- # size=4096 00:44:06.225 16:21:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:06.225 16:21:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:44:06.225 16:21:10 -- common/autotest_common.sh@877 -- # return 0 00:44:06.225 16:21:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:06.225 16:21:10 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:44:06.225 16:21:10 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:44:06.225 16:21:10 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:44:06.225 16:21:10 -- bdev/bdev_raid.sh@582 -- # echo 128 00:44:06.225 16:21:10 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:44:06.798 496+0 records in 00:44:06.798 496+0 records out 00:44:06.799 65011712 bytes (65 MB, 62 MiB) copied, 0.547877 s, 119 MB/s 00:44:06.799 16:21:10 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:44:06.799 16:21:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:44:06.799 16:21:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:44:06.799 16:21:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:44:06.799 16:21:10 -- bdev/nbd_common.sh@51 -- # local i 00:44:06.799 16:21:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:06.799 16:21:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:44:06.799 [2024-07-22 16:21:11.063738] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:07.060 16:21:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:07.061 16:21:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:07.061 16:21:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:07.061 16:21:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:07.061 16:21:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:07.061 16:21:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:07.061 16:21:11 -- bdev/nbd_common.sh@41 -- # break 00:44:07.061 16:21:11 -- bdev/nbd_common.sh@45 -- # return 0 00:44:07.061 16:21:11 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:44:07.319 [2024-07-22 16:21:11.342493] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:07.319 16:21:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:07.577 16:21:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:07.577 "name": "raid_bdev1", 00:44:07.577 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:07.577 "strip_size_kb": 64, 00:44:07.577 "state": "online", 00:44:07.577 "raid_level": "raid5f", 00:44:07.577 "superblock": true, 00:44:07.577 "num_base_bdevs": 3, 00:44:07.577 "num_base_bdevs_discovered": 2, 00:44:07.577 "num_base_bdevs_operational": 2, 00:44:07.577 "base_bdevs_list": [ 
00:44:07.577 { 00:44:07.577 "name": null, 00:44:07.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:07.577 "is_configured": false, 00:44:07.577 "data_offset": 2048, 00:44:07.577 "data_size": 63488 00:44:07.578 }, 00:44:07.578 { 00:44:07.578 "name": "BaseBdev2", 00:44:07.578 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:07.578 "is_configured": true, 00:44:07.578 "data_offset": 2048, 00:44:07.578 "data_size": 63488 00:44:07.578 }, 00:44:07.578 { 00:44:07.578 "name": "BaseBdev3", 00:44:07.578 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:07.578 "is_configured": true, 00:44:07.578 "data_offset": 2048, 00:44:07.578 "data_size": 63488 00:44:07.578 } 00:44:07.578 ] 00:44:07.578 }' 00:44:07.578 16:21:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:07.578 16:21:11 -- common/autotest_common.sh@10 -- # set +x 00:44:07.836 16:21:12 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:44:08.095 [2024-07-22 16:21:12.310784] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:44:08.095 [2024-07-22 16:21:12.310878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:08.095 [2024-07-22 16:21:12.329274] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028830 00:44:08.095 [2024-07-22 16:21:12.338779] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:08.095 16:21:12 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:09.487 "name": "raid_bdev1", 00:44:09.487 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:09.487 "strip_size_kb": 64, 00:44:09.487 "state": "online", 00:44:09.487 "raid_level": "raid5f", 00:44:09.487 "superblock": true, 00:44:09.487 "num_base_bdevs": 3, 00:44:09.487 "num_base_bdevs_discovered": 3, 00:44:09.487 "num_base_bdevs_operational": 3, 00:44:09.487 "process": { 00:44:09.487 "type": "rebuild", 00:44:09.487 "target": "spare", 00:44:09.487 "progress": { 00:44:09.487 "blocks": 24576, 00:44:09.487 "percent": 19 00:44:09.487 } 00:44:09.487 }, 00:44:09.487 "base_bdevs_list": [ 00:44:09.487 { 00:44:09.487 "name": "spare", 00:44:09.487 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:09.487 "is_configured": true, 00:44:09.487 "data_offset": 2048, 00:44:09.487 "data_size": 63488 00:44:09.487 }, 00:44:09.487 { 00:44:09.487 "name": "BaseBdev2", 00:44:09.487 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:09.487 "is_configured": true, 00:44:09.487 "data_offset": 2048, 00:44:09.487 "data_size": 63488 00:44:09.487 }, 00:44:09.487 { 00:44:09.487 "name": "BaseBdev3", 00:44:09.487 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:09.487 "is_configured": true, 00:44:09.487 "data_offset": 2048, 00:44:09.487 "data_size": 63488 
00:44:09.487 } 00:44:09.487 ] 00:44:09.487 }' 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:44:09.487 16:21:13 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:44:09.746 [2024-07-22 16:21:13.906310] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:44:09.746 [2024-07-22 16:21:13.961147] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:44:09.746 [2024-07-22 16:21:13.961279] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:09.746 16:21:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:10.004 16:21:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:10.004 "name": "raid_bdev1", 00:44:10.004 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:10.004 "strip_size_kb": 64, 00:44:10.004 "state": "online", 00:44:10.004 "raid_level": "raid5f", 00:44:10.004 "superblock": true, 00:44:10.004 "num_base_bdevs": 3, 00:44:10.004 "num_base_bdevs_discovered": 2, 00:44:10.004 "num_base_bdevs_operational": 2, 00:44:10.004 "base_bdevs_list": [ 00:44:10.004 { 00:44:10.004 "name": null, 00:44:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:10.004 "is_configured": false, 00:44:10.004 "data_offset": 2048, 00:44:10.004 "data_size": 63488 00:44:10.004 }, 00:44:10.004 { 00:44:10.004 "name": "BaseBdev2", 00:44:10.004 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:10.004 "is_configured": true, 00:44:10.004 "data_offset": 2048, 00:44:10.004 "data_size": 63488 00:44:10.004 }, 00:44:10.004 { 00:44:10.004 "name": "BaseBdev3", 00:44:10.004 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:10.004 "is_configured": true, 00:44:10.004 "data_offset": 2048, 00:44:10.004 "data_size": 63488 00:44:10.004 } 00:44:10.004 ] 00:44:10.004 }' 00:44:10.004 16:21:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:10.004 16:21:14 -- common/autotest_common.sh@10 -- # set +x 00:44:10.262 16:21:14 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:10.262 16:21:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:10.262 16:21:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:44:10.262 16:21:14 -- bdev/bdev_raid.sh@185 -- 
# local target=none 00:44:10.262 16:21:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:10.519 "name": "raid_bdev1", 00:44:10.519 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:10.519 "strip_size_kb": 64, 00:44:10.519 "state": "online", 00:44:10.519 "raid_level": "raid5f", 00:44:10.519 "superblock": true, 00:44:10.519 "num_base_bdevs": 3, 00:44:10.519 "num_base_bdevs_discovered": 2, 00:44:10.519 "num_base_bdevs_operational": 2, 00:44:10.519 "base_bdevs_list": [ 00:44:10.519 { 00:44:10.519 "name": null, 00:44:10.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:10.519 "is_configured": false, 00:44:10.519 "data_offset": 2048, 00:44:10.519 "data_size": 63488 00:44:10.519 }, 00:44:10.519 { 00:44:10.519 "name": "BaseBdev2", 00:44:10.519 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:10.519 "is_configured": true, 00:44:10.519 "data_offset": 2048, 00:44:10.519 "data_size": 63488 00:44:10.519 }, 00:44:10.519 { 00:44:10.519 "name": "BaseBdev3", 00:44:10.519 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:10.519 "is_configured": true, 00:44:10.519 "data_offset": 2048, 00:44:10.519 "data_size": 63488 00:44:10.519 } 00:44:10.519 ] 00:44:10.519 }' 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:44:10.519 16:21:14 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:44:11.082 [2024-07-22 16:21:15.071679] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:44:11.082 [2024-07-22 16:21:15.071761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:11.082 [2024-07-22 16:21:15.085860] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028900 00:44:11.082 [2024-07-22 16:21:15.093597] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:44:11.082 16:21:15 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:44:12.012 16:21:16 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:12.012 16:21:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:12.012 16:21:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:44:12.012 16:21:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:44:12.012 16:21:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:12.012 16:21:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:12.012 16:21:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:12.271 "name": "raid_bdev1", 00:44:12.271 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:12.271 "strip_size_kb": 64, 00:44:12.271 "state": "online", 00:44:12.271 "raid_level": "raid5f", 00:44:12.271 "superblock": true, 00:44:12.271 "num_base_bdevs": 3, 
00:44:12.271 "num_base_bdevs_discovered": 3, 00:44:12.271 "num_base_bdevs_operational": 3, 00:44:12.271 "process": { 00:44:12.271 "type": "rebuild", 00:44:12.271 "target": "spare", 00:44:12.271 "progress": { 00:44:12.271 "blocks": 24576, 00:44:12.271 "percent": 19 00:44:12.271 } 00:44:12.271 }, 00:44:12.271 "base_bdevs_list": [ 00:44:12.271 { 00:44:12.271 "name": "spare", 00:44:12.271 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:12.271 "is_configured": true, 00:44:12.271 "data_offset": 2048, 00:44:12.271 "data_size": 63488 00:44:12.271 }, 00:44:12.271 { 00:44:12.271 "name": "BaseBdev2", 00:44:12.271 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:12.271 "is_configured": true, 00:44:12.271 "data_offset": 2048, 00:44:12.271 "data_size": 63488 00:44:12.271 }, 00:44:12.271 { 00:44:12.271 "name": "BaseBdev3", 00:44:12.271 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:12.271 "is_configured": true, 00:44:12.271 "data_offset": 2048, 00:44:12.271 "data_size": 63488 00:44:12.271 } 00:44:12.271 ] 00:44:12.271 }' 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:44:12.271 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@657 -- # local timeout=641 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:12.271 16:21:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:12.531 16:21:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:12.531 "name": "raid_bdev1", 00:44:12.531 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:12.531 "strip_size_kb": 64, 00:44:12.531 "state": "online", 00:44:12.531 "raid_level": "raid5f", 00:44:12.531 "superblock": true, 00:44:12.531 "num_base_bdevs": 3, 00:44:12.531 "num_base_bdevs_discovered": 3, 00:44:12.531 "num_base_bdevs_operational": 3, 00:44:12.531 "process": { 00:44:12.531 "type": "rebuild", 00:44:12.531 "target": "spare", 00:44:12.531 "progress": { 00:44:12.531 "blocks": 30720, 00:44:12.531 "percent": 24 00:44:12.531 } 00:44:12.531 }, 00:44:12.531 "base_bdevs_list": [ 00:44:12.531 { 00:44:12.531 "name": "spare", 00:44:12.531 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:12.531 "is_configured": true, 00:44:12.531 "data_offset": 2048, 00:44:12.531 "data_size": 63488 00:44:12.531 }, 00:44:12.531 { 00:44:12.531 "name": "BaseBdev2", 00:44:12.531 "uuid": 
"28252262-29d5-5280-aa88-187cd373696d", 00:44:12.531 "is_configured": true, 00:44:12.531 "data_offset": 2048, 00:44:12.531 "data_size": 63488 00:44:12.531 }, 00:44:12.531 { 00:44:12.531 "name": "BaseBdev3", 00:44:12.531 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:12.531 "is_configured": true, 00:44:12.531 "data_offset": 2048, 00:44:12.531 "data_size": 63488 00:44:12.531 } 00:44:12.531 ] 00:44:12.531 }' 00:44:12.531 16:21:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:12.531 16:21:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:12.531 16:21:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:12.531 16:21:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:44:12.531 16:21:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:44:13.465 16:21:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:44:13.465 16:21:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:13.465 16:21:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:13.465 16:21:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:44:13.466 16:21:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:44:13.466 16:21:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:13.723 16:21:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:13.723 16:21:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:13.980 16:21:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:13.980 "name": "raid_bdev1", 00:44:13.980 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:13.980 "strip_size_kb": 64, 00:44:13.980 "state": "online", 00:44:13.980 "raid_level": "raid5f", 00:44:13.980 "superblock": true, 00:44:13.980 "num_base_bdevs": 3, 00:44:13.980 "num_base_bdevs_discovered": 3, 00:44:13.980 "num_base_bdevs_operational": 3, 00:44:13.980 "process": { 00:44:13.980 "type": "rebuild", 00:44:13.980 "target": "spare", 00:44:13.980 "progress": { 00:44:13.980 "blocks": 57344, 00:44:13.980 "percent": 45 00:44:13.980 } 00:44:13.980 }, 00:44:13.980 "base_bdevs_list": [ 00:44:13.980 { 00:44:13.980 "name": "spare", 00:44:13.980 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:13.980 "is_configured": true, 00:44:13.980 "data_offset": 2048, 00:44:13.980 "data_size": 63488 00:44:13.980 }, 00:44:13.980 { 00:44:13.980 "name": "BaseBdev2", 00:44:13.981 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:13.981 "is_configured": true, 00:44:13.981 "data_offset": 2048, 00:44:13.981 "data_size": 63488 00:44:13.981 }, 00:44:13.981 { 00:44:13.981 "name": "BaseBdev3", 00:44:13.981 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:13.981 "is_configured": true, 00:44:13.981 "data_offset": 2048, 00:44:13.981 "data_size": 63488 00:44:13.981 } 00:44:13.981 ] 00:44:13.981 }' 00:44:13.981 16:21:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:13.981 16:21:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:13.981 16:21:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:13.981 16:21:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:44:13.981 16:21:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:14.914 16:21:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:15.171 16:21:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:15.171 "name": "raid_bdev1", 00:44:15.171 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:15.171 "strip_size_kb": 64, 00:44:15.171 "state": "online", 00:44:15.171 "raid_level": "raid5f", 00:44:15.171 "superblock": true, 00:44:15.171 "num_base_bdevs": 3, 00:44:15.171 "num_base_bdevs_discovered": 3, 00:44:15.171 "num_base_bdevs_operational": 3, 00:44:15.171 "process": { 00:44:15.171 "type": "rebuild", 00:44:15.171 "target": "spare", 00:44:15.171 "progress": { 00:44:15.171 "blocks": 83968, 00:44:15.171 "percent": 66 00:44:15.171 } 00:44:15.171 }, 00:44:15.171 "base_bdevs_list": [ 00:44:15.171 { 00:44:15.171 "name": "spare", 00:44:15.171 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:15.171 "is_configured": true, 00:44:15.171 "data_offset": 2048, 00:44:15.171 "data_size": 63488 00:44:15.171 }, 00:44:15.171 { 00:44:15.171 "name": "BaseBdev2", 00:44:15.171 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:15.171 "is_configured": true, 00:44:15.171 "data_offset": 2048, 00:44:15.171 "data_size": 63488 00:44:15.171 }, 00:44:15.171 { 00:44:15.171 "name": "BaseBdev3", 00:44:15.171 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:15.171 "is_configured": true, 00:44:15.171 "data_offset": 2048, 00:44:15.171 "data_size": 63488 00:44:15.171 } 00:44:15.171 ] 00:44:15.171 }' 00:44:15.171 16:21:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:15.171 16:21:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:15.171 16:21:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:15.171 16:21:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:44:15.171 16:21:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:16.105 16:21:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:16.364 16:21:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:16.364 "name": "raid_bdev1", 00:44:16.364 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:16.364 "strip_size_kb": 64, 00:44:16.364 "state": "online", 00:44:16.364 "raid_level": "raid5f", 00:44:16.364 "superblock": true, 00:44:16.364 "num_base_bdevs": 3, 00:44:16.364 "num_base_bdevs_discovered": 3, 00:44:16.364 "num_base_bdevs_operational": 3, 00:44:16.364 "process": { 00:44:16.364 "type": "rebuild", 00:44:16.364 "target": "spare", 00:44:16.364 "progress": { 00:44:16.364 "blocks": 
110592, 00:44:16.364 "percent": 87 00:44:16.364 } 00:44:16.364 }, 00:44:16.364 "base_bdevs_list": [ 00:44:16.364 { 00:44:16.364 "name": "spare", 00:44:16.364 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:16.364 "is_configured": true, 00:44:16.364 "data_offset": 2048, 00:44:16.364 "data_size": 63488 00:44:16.364 }, 00:44:16.364 { 00:44:16.364 "name": "BaseBdev2", 00:44:16.364 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:16.364 "is_configured": true, 00:44:16.364 "data_offset": 2048, 00:44:16.364 "data_size": 63488 00:44:16.364 }, 00:44:16.364 { 00:44:16.364 "name": "BaseBdev3", 00:44:16.364 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:16.364 "is_configured": true, 00:44:16.364 "data_offset": 2048, 00:44:16.364 "data_size": 63488 00:44:16.364 } 00:44:16.364 ] 00:44:16.364 }' 00:44:16.364 16:21:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:16.621 16:21:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:44:16.621 16:21:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:16.621 16:21:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:44:16.621 16:21:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:44:17.187 [2024-07-22 16:21:21.377124] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:44:17.187 [2024-07-22 16:21:21.377323] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:44:17.187 [2024-07-22 16:21:21.377517] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:17.445 16:21:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:17.703 "name": "raid_bdev1", 00:44:17.703 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:17.703 "strip_size_kb": 64, 00:44:17.703 "state": "online", 00:44:17.703 "raid_level": "raid5f", 00:44:17.703 "superblock": true, 00:44:17.703 "num_base_bdevs": 3, 00:44:17.703 "num_base_bdevs_discovered": 3, 00:44:17.703 "num_base_bdevs_operational": 3, 00:44:17.703 "base_bdevs_list": [ 00:44:17.703 { 00:44:17.703 "name": "spare", 00:44:17.703 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:17.703 "is_configured": true, 00:44:17.703 "data_offset": 2048, 00:44:17.703 "data_size": 63488 00:44:17.703 }, 00:44:17.703 { 00:44:17.703 "name": "BaseBdev2", 00:44:17.703 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:17.703 "is_configured": true, 00:44:17.703 "data_offset": 2048, 00:44:17.703 "data_size": 63488 00:44:17.703 }, 00:44:17.703 { 00:44:17.703 "name": "BaseBdev3", 00:44:17.703 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:17.703 "is_configured": true, 00:44:17.703 "data_offset": 2048, 00:44:17.703 "data_size": 63488 00:44:17.703 } 00:44:17.703 ] 00:44:17.703 }' 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@660 -- # break 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:17.703 16:21:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:18.011 "name": "raid_bdev1", 00:44:18.011 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:18.011 "strip_size_kb": 64, 00:44:18.011 "state": "online", 00:44:18.011 "raid_level": "raid5f", 00:44:18.011 "superblock": true, 00:44:18.011 "num_base_bdevs": 3, 00:44:18.011 "num_base_bdevs_discovered": 3, 00:44:18.011 "num_base_bdevs_operational": 3, 00:44:18.011 "base_bdevs_list": [ 00:44:18.011 { 00:44:18.011 "name": "spare", 00:44:18.011 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:18.011 "is_configured": true, 00:44:18.011 "data_offset": 2048, 00:44:18.011 "data_size": 63488 00:44:18.011 }, 00:44:18.011 { 00:44:18.011 "name": "BaseBdev2", 00:44:18.011 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:18.011 "is_configured": true, 00:44:18.011 "data_offset": 2048, 00:44:18.011 "data_size": 63488 00:44:18.011 }, 00:44:18.011 { 00:44:18.011 "name": "BaseBdev3", 00:44:18.011 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:18.011 "is_configured": true, 00:44:18.011 "data_offset": 2048, 00:44:18.011 "data_size": 63488 00:44:18.011 } 00:44:18.011 ] 00:44:18.011 }' 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:18.011 16:21:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:18.269 16:21:22 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:44:18.269 "name": "raid_bdev1", 00:44:18.269 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:18.269 "strip_size_kb": 64, 00:44:18.269 "state": "online", 00:44:18.269 "raid_level": "raid5f", 00:44:18.269 "superblock": true, 00:44:18.269 "num_base_bdevs": 3, 00:44:18.269 "num_base_bdevs_discovered": 3, 00:44:18.269 "num_base_bdevs_operational": 3, 00:44:18.269 "base_bdevs_list": [ 00:44:18.269 { 00:44:18.269 "name": "spare", 00:44:18.269 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:18.269 "is_configured": true, 00:44:18.269 "data_offset": 2048, 00:44:18.269 "data_size": 63488 00:44:18.269 }, 00:44:18.269 { 00:44:18.269 "name": "BaseBdev2", 00:44:18.270 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:18.270 "is_configured": true, 00:44:18.270 "data_offset": 2048, 00:44:18.270 "data_size": 63488 00:44:18.270 }, 00:44:18.270 { 00:44:18.270 "name": "BaseBdev3", 00:44:18.270 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:18.270 "is_configured": true, 00:44:18.270 "data_offset": 2048, 00:44:18.270 "data_size": 63488 00:44:18.270 } 00:44:18.270 ] 00:44:18.270 }' 00:44:18.270 16:21:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:18.270 16:21:22 -- common/autotest_common.sh@10 -- # set +x 00:44:18.528 16:21:22 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:44:19.093 [2024-07-22 16:21:23.092774] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:19.093 [2024-07-22 16:21:23.092827] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:19.093 [2024-07-22 16:21:23.092965] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:19.093 [2024-07-22 16:21:23.093125] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:19.093 [2024-07-22 16:21:23.093159] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:44:19.093 16:21:23 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:19.093 16:21:23 -- bdev/bdev_raid.sh@671 -- # jq length 00:44:19.351 16:21:23 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:44:19.351 16:21:23 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:44:19.351 16:21:23 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@12 -- # local i 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:19.351 16:21:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:44:19.610 /dev/nbd0 00:44:19.610 16:21:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:44:19.610 16:21:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:44:19.610 16:21:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:44:19.610 
16:21:23 -- common/autotest_common.sh@857 -- # local i 00:44:19.610 16:21:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:44:19.610 16:21:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:44:19.610 16:21:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:44:19.610 16:21:23 -- common/autotest_common.sh@861 -- # break 00:44:19.610 16:21:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:44:19.610 16:21:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:44:19.610 16:21:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:19.610 1+0 records in 00:44:19.610 1+0 records out 00:44:19.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369247 s, 11.1 MB/s 00:44:19.610 16:21:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:19.610 16:21:23 -- common/autotest_common.sh@874 -- # size=4096 00:44:19.610 16:21:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:19.610 16:21:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:44:19.610 16:21:23 -- common/autotest_common.sh@877 -- # return 0 00:44:19.610 16:21:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:19.610 16:21:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:19.610 16:21:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:44:19.869 /dev/nbd1 00:44:19.869 16:21:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:44:19.869 16:21:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:44:19.869 16:21:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:44:19.869 16:21:24 -- common/autotest_common.sh@857 -- # local i 00:44:19.869 16:21:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:44:19.869 16:21:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:44:19.869 16:21:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:44:19.869 16:21:24 -- common/autotest_common.sh@861 -- # break 00:44:19.869 16:21:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:44:19.869 16:21:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:44:19.869 16:21:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:44:19.869 1+0 records in 00:44:19.869 1+0 records out 00:44:19.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420852 s, 9.7 MB/s 00:44:19.869 16:21:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:19.869 16:21:24 -- common/autotest_common.sh@874 -- # size=4096 00:44:19.869 16:21:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:44:19.869 16:21:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:44:19.869 16:21:24 -- common/autotest_common.sh@877 -- # return 0 00:44:19.869 16:21:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:44:19.869 16:21:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:44:19.869 16:21:24 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:44:20.127 16:21:24 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:44:20.127 16:21:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:44:20.127 16:21:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:44:20.127 16:21:24 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:44:20.127 16:21:24 -- bdev/nbd_common.sh@51 -- # local i 00:44:20.127 16:21:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:20.127 16:21:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@41 -- # break 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@45 -- # return 0 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:44:20.385 16:21:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@41 -- # break 00:44:20.643 16:21:24 -- bdev/nbd_common.sh@45 -- # return 0 00:44:20.643 16:21:24 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:44:20.643 16:21:24 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:44:20.643 16:21:24 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:44:20.643 16:21:24 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:44:20.909 16:21:25 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:44:21.167 [2024-07-22 16:21:25.340818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:44:21.167 [2024-07-22 16:21:25.340946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:21.167 [2024-07-22 16:21:25.340998] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:44:21.167 [2024-07-22 16:21:25.341022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:21.167 [2024-07-22 16:21:25.344070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:21.167 [2024-07-22 16:21:25.344123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:44:21.167 [2024-07-22 16:21:25.344265] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:44:21.167 [2024-07-22 16:21:25.344355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:21.167 BaseBdev1 00:44:21.167 16:21:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:44:21.167 16:21:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:44:21.167 16:21:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 
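Note on the step running here: with raid_bdev1 deleted and its contents verified against the spare via cmp, the test now deletes and re-creates each passthru base bdev in turn. Every bdev_passthru_create triggers the bdev examine path, the raid superblock written earlier (the -s flag on bdev_raid_create) is found on the member, and once enough members have re-appeared the raid bdev is assembled again purely from on-disk metadata. A condensed sketch of the RPC sequence visible in the surrounding trace, where rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py (an illustrative summary of this run, not additional test output):

  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  # ...likewise for BaseBdev3 and the spare (created from spare_delay); each create
  # logs "raid superblock found on bdev <name>" until raid_bdev1 comes back online
  # with all three members, as the state check below confirms.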
00:44:21.425 16:21:25 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:44:21.683 [2024-07-22 16:21:25.944962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:44:21.683 [2024-07-22 16:21:25.945099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:21.683 [2024-07-22 16:21:25.945145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:44:21.683 [2024-07-22 16:21:25.945167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:21.683 [2024-07-22 16:21:25.945880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:21.683 [2024-07-22 16:21:25.945931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:44:21.683 [2024-07-22 16:21:25.946084] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:44:21.683 [2024-07-22 16:21:25.946112] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:44:21.683 [2024-07-22 16:21:25.946128] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:21.683 [2024-07-22 16:21:25.946175] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state configuring 00:44:21.683 [2024-07-22 16:21:25.946275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:21.683 BaseBdev2 00:44:21.942 16:21:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:44:21.942 16:21:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:44:21.942 16:21:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:44:22.200 16:21:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:44:22.458 [2024-07-22 16:21:26.573142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:44:22.458 [2024-07-22 16:21:26.573261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:22.458 [2024-07-22 16:21:26.573317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:44:22.458 [2024-07-22 16:21:26.573336] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:22.458 [2024-07-22 16:21:26.573979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:22.458 [2024-07-22 16:21:26.574038] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:44:22.458 [2024-07-22 16:21:26.574166] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:44:22.458 [2024-07-22 16:21:26.574215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:22.458 BaseBdev3 00:44:22.458 16:21:26 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:44:22.715 16:21:26 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:44:22.995 [2024-07-22 16:21:27.101360] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:44:22.995 [2024-07-22 16:21:27.101482] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:22.995 [2024-07-22 16:21:27.101531] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:44:22.995 [2024-07-22 16:21:27.101548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:22.995 [2024-07-22 16:21:27.102181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:22.995 [2024-07-22 16:21:27.102232] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:44:22.995 [2024-07-22 16:21:27.102364] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:44:22.995 [2024-07-22 16:21:27.102400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:44:22.995 spare 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:22.995 16:21:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:22.996 16:21:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:22.996 16:21:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:22.996 [2024-07-22 16:21:27.202584] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000b480 00:44:22.996 [2024-07-22 16:21:27.202670] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:44:22.996 [2024-07-22 16:21:27.202911] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000046fb0 00:44:22.996 [2024-07-22 16:21:27.209559] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000b480 00:44:22.996 [2024-07-22 16:21:27.209696] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000b480 00:44:22.996 [2024-07-22 16:21:27.210029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:23.276 16:21:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:23.276 "name": "raid_bdev1", 00:44:23.276 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:23.276 "strip_size_kb": 64, 00:44:23.276 "state": "online", 00:44:23.276 "raid_level": "raid5f", 00:44:23.276 "superblock": true, 00:44:23.276 "num_base_bdevs": 3, 00:44:23.276 "num_base_bdevs_discovered": 3, 00:44:23.276 "num_base_bdevs_operational": 3, 00:44:23.276 "base_bdevs_list": [ 00:44:23.276 { 00:44:23.276 "name": "spare", 00:44:23.276 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:23.276 "is_configured": true, 00:44:23.276 "data_offset": 2048, 00:44:23.276 "data_size": 63488 00:44:23.276 }, 00:44:23.276 { 00:44:23.276 "name": "BaseBdev2", 00:44:23.276 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 
00:44:23.276 "is_configured": true, 00:44:23.276 "data_offset": 2048, 00:44:23.276 "data_size": 63488 00:44:23.276 }, 00:44:23.276 { 00:44:23.276 "name": "BaseBdev3", 00:44:23.276 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:23.276 "is_configured": true, 00:44:23.276 "data_offset": 2048, 00:44:23.276 "data_size": 63488 00:44:23.276 } 00:44:23.276 ] 00:44:23.276 }' 00:44:23.276 16:21:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:23.276 16:21:27 -- common/autotest_common.sh@10 -- # set +x 00:44:23.534 16:21:27 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:44:23.534 16:21:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:44:23.534 16:21:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:44:23.534 16:21:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:44:23.534 16:21:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:44:23.534 16:21:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:23.534 16:21:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:24.100 16:21:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:44:24.100 "name": "raid_bdev1", 00:44:24.100 "uuid": "3d7ab3a7-657e-43e3-9f22-4a597d110d62", 00:44:24.100 "strip_size_kb": 64, 00:44:24.100 "state": "online", 00:44:24.100 "raid_level": "raid5f", 00:44:24.100 "superblock": true, 00:44:24.100 "num_base_bdevs": 3, 00:44:24.100 "num_base_bdevs_discovered": 3, 00:44:24.100 "num_base_bdevs_operational": 3, 00:44:24.100 "base_bdevs_list": [ 00:44:24.100 { 00:44:24.100 "name": "spare", 00:44:24.100 "uuid": "97c1d297-ef4c-5048-b9a4-ce0997d8a262", 00:44:24.100 "is_configured": true, 00:44:24.100 "data_offset": 2048, 00:44:24.100 "data_size": 63488 00:44:24.100 }, 00:44:24.100 { 00:44:24.100 "name": "BaseBdev2", 00:44:24.100 "uuid": "28252262-29d5-5280-aa88-187cd373696d", 00:44:24.100 "is_configured": true, 00:44:24.100 "data_offset": 2048, 00:44:24.100 "data_size": 63488 00:44:24.100 }, 00:44:24.100 { 00:44:24.100 "name": "BaseBdev3", 00:44:24.100 "uuid": "4c7da165-cfad-5321-8600-5282540a51f2", 00:44:24.100 "is_configured": true, 00:44:24.100 "data_offset": 2048, 00:44:24.100 "data_size": 63488 00:44:24.100 } 00:44:24.100 ] 00:44:24.100 }' 00:44:24.100 16:21:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:44:24.100 16:21:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:44:24.100 16:21:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:44:24.100 16:21:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:44:24.100 16:21:28 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:44:24.100 16:21:28 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:24.358 16:21:28 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:44:24.358 16:21:28 -- bdev/bdev_raid.sh@709 -- # killprocess 85586 00:44:24.358 16:21:28 -- common/autotest_common.sh@926 -- # '[' -z 85586 ']' 00:44:24.358 16:21:28 -- common/autotest_common.sh@930 -- # kill -0 85586 00:44:24.358 16:21:28 -- common/autotest_common.sh@931 -- # uname 00:44:24.358 16:21:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:44:24.358 16:21:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85586 00:44:24.358 killing process with pid 85586 00:44:24.358 Received shutdown signal, test time was about 60.000000 seconds 00:44:24.358 
00:44:24.358 Latency(us) 00:44:24.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:24.358 =================================================================================================================== 00:44:24.358 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:24.358 16:21:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:44:24.358 16:21:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:44:24.358 16:21:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85586' 00:44:24.358 16:21:28 -- common/autotest_common.sh@945 -- # kill 85586 00:44:24.358 16:21:28 -- common/autotest_common.sh@950 -- # wait 85586 00:44:24.358 [2024-07-22 16:21:28.461531] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:24.358 [2024-07-22 16:21:28.461675] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:24.358 [2024-07-22 16:21:28.461822] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:24.358 [2024-07-22 16:21:28.461845] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b480 name raid_bdev1, state offline 00:44:24.616 [2024-07-22 16:21:28.874886] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:25.990 16:21:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:44:25.990 00:44:25.990 real 0m25.426s 00:44:25.990 user 0m37.853s 00:44:25.990 sys 0m3.887s 00:44:25.990 16:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:25.990 ************************************ 00:44:25.990 END TEST raid5f_rebuild_test_sb 00:44:25.990 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:44:25.990 ************************************ 00:44:25.990 16:21:30 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:44:25.990 16:21:30 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:44:25.990 16:21:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:44:25.990 16:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:44:25.990 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:44:26.274 ************************************ 00:44:26.274 START TEST raid5f_state_function_test 00:44:26.274 ************************************ 00:44:26.274 16:21:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:26.274 16:21:30 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:44:26.274 Process raid pid: 86188 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=86188 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 86188' 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:44:26.274 16:21:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 86188 /var/tmp/spdk-raid.sock 00:44:26.274 16:21:30 -- common/autotest_common.sh@819 -- # '[' -z 86188 ']' 00:44:26.274 16:21:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:44:26.274 16:21:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:44:26.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:44:26.274 16:21:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:44:26.274 16:21:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:44:26.274 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:44:26.274 [2024-07-22 16:21:30.357354] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
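The trace above shows the state-function test launching a bare bdev_svc app on its own RPC socket and then waiting for it to listen. A minimal sketch of that startup pattern, using only the binary and RPC client visible in the trace; the polling loop is an assumption, not the actual waitforlisten helper:

  sock=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!
  # Assumption: poll with any cheap RPC until the app answers on the socket.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_get_bdevs >/dev/null 2>&1; do
    sleep 0.1
  done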
00:44:26.274 [2024-07-22 16:21:30.357587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:26.532 [2024-07-22 16:21:30.548276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:26.790 [2024-07-22 16:21:30.816564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.790 [2024-07-22 16:21:31.039068] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:27.356 16:21:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:44:27.356 16:21:31 -- common/autotest_common.sh@852 -- # return 0 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:44:27.356 [2024-07-22 16:21:31.568460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:27.356 [2024-07-22 16:21:31.568816] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:27.356 [2024-07-22 16:21:31.568845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:27.356 [2024-07-22 16:21:31.568865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:27.356 [2024-07-22 16:21:31.568876] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:27.356 [2024-07-22 16:21:31.568892] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:27.356 [2024-07-22 16:21:31.568901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:27.356 [2024-07-22 16:21:31.568916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:27.356 16:21:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:27.615 16:21:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:27.615 "name": "Existed_Raid", 00:44:27.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:27.615 "strip_size_kb": 64, 00:44:27.615 "state": "configuring", 00:44:27.615 "raid_level": "raid5f", 00:44:27.615 "superblock": false, 00:44:27.615 "num_base_bdevs": 4, 00:44:27.615 "num_base_bdevs_discovered": 0, 00:44:27.615 "num_base_bdevs_operational": 4, 00:44:27.615 "base_bdevs_list": [ 00:44:27.615 { 00:44:27.615 
"name": "BaseBdev1", 00:44:27.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:27.615 "is_configured": false, 00:44:27.615 "data_offset": 0, 00:44:27.615 "data_size": 0 00:44:27.615 }, 00:44:27.615 { 00:44:27.615 "name": "BaseBdev2", 00:44:27.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:27.615 "is_configured": false, 00:44:27.615 "data_offset": 0, 00:44:27.615 "data_size": 0 00:44:27.615 }, 00:44:27.615 { 00:44:27.615 "name": "BaseBdev3", 00:44:27.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:27.615 "is_configured": false, 00:44:27.615 "data_offset": 0, 00:44:27.615 "data_size": 0 00:44:27.615 }, 00:44:27.615 { 00:44:27.615 "name": "BaseBdev4", 00:44:27.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:27.615 "is_configured": false, 00:44:27.615 "data_offset": 0, 00:44:27.615 "data_size": 0 00:44:27.615 } 00:44:27.615 ] 00:44:27.615 }' 00:44:27.615 16:21:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:27.615 16:21:31 -- common/autotest_common.sh@10 -- # set +x 00:44:28.179 16:21:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:44:28.179 [2024-07-22 16:21:32.388560] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:28.179 [2024-07-22 16:21:32.388650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:44:28.179 16:21:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:44:28.437 [2024-07-22 16:21:32.632655] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:28.437 [2024-07-22 16:21:32.633032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:28.437 [2024-07-22 16:21:32.633061] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:28.437 [2024-07-22 16:21:32.633081] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:28.437 [2024-07-22 16:21:32.633091] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:28.437 [2024-07-22 16:21:32.633107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:28.437 [2024-07-22 16:21:32.633116] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:28.437 [2024-07-22 16:21:32.633132] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:28.437 16:21:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:44:29.013 [2024-07-22 16:21:32.976330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:29.013 BaseBdev1 00:44:29.013 16:21:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:44:29.013 16:21:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:44:29.013 16:21:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:29.013 16:21:32 -- common/autotest_common.sh@889 -- # local i 00:44:29.013 16:21:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:29.013 16:21:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:29.013 16:21:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:29.013 16:21:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:44:29.271 [ 00:44:29.271 { 00:44:29.271 "name": "BaseBdev1", 00:44:29.271 "aliases": [ 00:44:29.271 "4d28c662-70ae-46d5-b4e2-c924093ed4bf" 00:44:29.271 ], 00:44:29.271 "product_name": "Malloc disk", 00:44:29.271 "block_size": 512, 00:44:29.271 "num_blocks": 65536, 00:44:29.271 "uuid": "4d28c662-70ae-46d5-b4e2-c924093ed4bf", 00:44:29.271 "assigned_rate_limits": { 00:44:29.271 "rw_ios_per_sec": 0, 00:44:29.271 "rw_mbytes_per_sec": 0, 00:44:29.271 "r_mbytes_per_sec": 0, 00:44:29.271 "w_mbytes_per_sec": 0 00:44:29.271 }, 00:44:29.271 "claimed": true, 00:44:29.271 "claim_type": "exclusive_write", 00:44:29.271 "zoned": false, 00:44:29.271 "supported_io_types": { 00:44:29.271 "read": true, 00:44:29.271 "write": true, 00:44:29.271 "unmap": true, 00:44:29.271 "write_zeroes": true, 00:44:29.271 "flush": true, 00:44:29.271 "reset": true, 00:44:29.271 "compare": false, 00:44:29.271 "compare_and_write": false, 00:44:29.271 "abort": true, 00:44:29.271 "nvme_admin": false, 00:44:29.271 "nvme_io": false 00:44:29.271 }, 00:44:29.271 "memory_domains": [ 00:44:29.271 { 00:44:29.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:29.271 "dma_device_type": 2 00:44:29.271 } 00:44:29.271 ], 00:44:29.271 "driver_specific": {} 00:44:29.271 } 00:44:29.271 ] 00:44:29.271 16:21:33 -- common/autotest_common.sh@895 -- # return 0 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:29.271 16:21:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:29.272 16:21:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:29.272 16:21:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:29.272 16:21:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:29.529 16:21:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:29.529 "name": "Existed_Raid", 00:44:29.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:29.529 "strip_size_kb": 64, 00:44:29.529 "state": "configuring", 00:44:29.529 "raid_level": "raid5f", 00:44:29.529 "superblock": false, 00:44:29.529 "num_base_bdevs": 4, 00:44:29.529 "num_base_bdevs_discovered": 1, 00:44:29.529 "num_base_bdevs_operational": 4, 00:44:29.529 "base_bdevs_list": [ 00:44:29.529 { 00:44:29.529 "name": "BaseBdev1", 00:44:29.529 "uuid": "4d28c662-70ae-46d5-b4e2-c924093ed4bf", 00:44:29.529 "is_configured": true, 00:44:29.529 "data_offset": 0, 00:44:29.529 "data_size": 65536 00:44:29.529 }, 00:44:29.529 { 00:44:29.529 "name": "BaseBdev2", 00:44:29.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:29.529 "is_configured": false, 00:44:29.529 "data_offset": 0, 00:44:29.529 "data_size": 0 00:44:29.529 }, 
00:44:29.529 { 00:44:29.529 "name": "BaseBdev3", 00:44:29.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:29.529 "is_configured": false, 00:44:29.529 "data_offset": 0, 00:44:29.529 "data_size": 0 00:44:29.529 }, 00:44:29.529 { 00:44:29.529 "name": "BaseBdev4", 00:44:29.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:29.529 "is_configured": false, 00:44:29.529 "data_offset": 0, 00:44:29.529 "data_size": 0 00:44:29.529 } 00:44:29.529 ] 00:44:29.529 }' 00:44:29.529 16:21:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:29.529 16:21:33 -- common/autotest_common.sh@10 -- # set +x 00:44:29.787 16:21:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:44:30.045 [2024-07-22 16:21:34.304736] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:30.046 [2024-07-22 16:21:34.304824] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:44:30.303 16:21:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:44:30.303 16:21:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:44:30.303 [2024-07-22 16:21:34.572887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:30.303 [2024-07-22 16:21:34.575731] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:30.303 [2024-07-22 16:21:34.575819] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:30.303 [2024-07-22 16:21:34.575845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:30.303 [2024-07-22 16:21:34.575874] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:30.303 [2024-07-22 16:21:34.575894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:30.303 [2024-07-22 16:21:34.575926] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:30.565 "name": "Existed_Raid", 00:44:30.565 
"uuid": "00000000-0000-0000-0000-000000000000", 00:44:30.565 "strip_size_kb": 64, 00:44:30.565 "state": "configuring", 00:44:30.565 "raid_level": "raid5f", 00:44:30.565 "superblock": false, 00:44:30.565 "num_base_bdevs": 4, 00:44:30.565 "num_base_bdevs_discovered": 1, 00:44:30.565 "num_base_bdevs_operational": 4, 00:44:30.565 "base_bdevs_list": [ 00:44:30.565 { 00:44:30.565 "name": "BaseBdev1", 00:44:30.565 "uuid": "4d28c662-70ae-46d5-b4e2-c924093ed4bf", 00:44:30.565 "is_configured": true, 00:44:30.565 "data_offset": 0, 00:44:30.565 "data_size": 65536 00:44:30.565 }, 00:44:30.565 { 00:44:30.565 "name": "BaseBdev2", 00:44:30.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:30.565 "is_configured": false, 00:44:30.565 "data_offset": 0, 00:44:30.565 "data_size": 0 00:44:30.565 }, 00:44:30.565 { 00:44:30.565 "name": "BaseBdev3", 00:44:30.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:30.565 "is_configured": false, 00:44:30.565 "data_offset": 0, 00:44:30.565 "data_size": 0 00:44:30.565 }, 00:44:30.565 { 00:44:30.565 "name": "BaseBdev4", 00:44:30.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:30.565 "is_configured": false, 00:44:30.565 "data_offset": 0, 00:44:30.565 "data_size": 0 00:44:30.565 } 00:44:30.565 ] 00:44:30.565 }' 00:44:30.565 16:21:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:30.565 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:44:31.144 16:21:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:44:31.402 [2024-07-22 16:21:35.489192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:31.402 BaseBdev2 00:44:31.402 16:21:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:44:31.402 16:21:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:44:31.402 16:21:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:31.402 16:21:35 -- common/autotest_common.sh@889 -- # local i 00:44:31.402 16:21:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:31.402 16:21:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:31.402 16:21:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:31.660 16:21:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:44:31.919 [ 00:44:31.919 { 00:44:31.919 "name": "BaseBdev2", 00:44:31.919 "aliases": [ 00:44:31.919 "5d63e45d-f2dc-49c1-a0f3-c94010db574c" 00:44:31.919 ], 00:44:31.919 "product_name": "Malloc disk", 00:44:31.919 "block_size": 512, 00:44:31.919 "num_blocks": 65536, 00:44:31.919 "uuid": "5d63e45d-f2dc-49c1-a0f3-c94010db574c", 00:44:31.919 "assigned_rate_limits": { 00:44:31.919 "rw_ios_per_sec": 0, 00:44:31.919 "rw_mbytes_per_sec": 0, 00:44:31.919 "r_mbytes_per_sec": 0, 00:44:31.919 "w_mbytes_per_sec": 0 00:44:31.919 }, 00:44:31.919 "claimed": true, 00:44:31.919 "claim_type": "exclusive_write", 00:44:31.919 "zoned": false, 00:44:31.919 "supported_io_types": { 00:44:31.919 "read": true, 00:44:31.919 "write": true, 00:44:31.919 "unmap": true, 00:44:31.919 "write_zeroes": true, 00:44:31.919 "flush": true, 00:44:31.919 "reset": true, 00:44:31.919 "compare": false, 00:44:31.919 "compare_and_write": false, 00:44:31.919 "abort": true, 00:44:31.919 "nvme_admin": false, 00:44:31.919 "nvme_io": false 00:44:31.919 }, 00:44:31.919 "memory_domains": [ 
00:44:31.919 { 00:44:31.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:31.919 "dma_device_type": 2 00:44:31.919 } 00:44:31.919 ], 00:44:31.919 "driver_specific": {} 00:44:31.919 } 00:44:31.919 ] 00:44:31.919 16:21:35 -- common/autotest_common.sh@895 -- # return 0 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:31.919 16:21:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:32.177 16:21:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:32.177 "name": "Existed_Raid", 00:44:32.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.177 "strip_size_kb": 64, 00:44:32.177 "state": "configuring", 00:44:32.177 "raid_level": "raid5f", 00:44:32.177 "superblock": false, 00:44:32.177 "num_base_bdevs": 4, 00:44:32.177 "num_base_bdevs_discovered": 2, 00:44:32.177 "num_base_bdevs_operational": 4, 00:44:32.177 "base_bdevs_list": [ 00:44:32.177 { 00:44:32.177 "name": "BaseBdev1", 00:44:32.177 "uuid": "4d28c662-70ae-46d5-b4e2-c924093ed4bf", 00:44:32.177 "is_configured": true, 00:44:32.177 "data_offset": 0, 00:44:32.177 "data_size": 65536 00:44:32.177 }, 00:44:32.177 { 00:44:32.177 "name": "BaseBdev2", 00:44:32.177 "uuid": "5d63e45d-f2dc-49c1-a0f3-c94010db574c", 00:44:32.177 "is_configured": true, 00:44:32.177 "data_offset": 0, 00:44:32.177 "data_size": 65536 00:44:32.177 }, 00:44:32.177 { 00:44:32.177 "name": "BaseBdev3", 00:44:32.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.177 "is_configured": false, 00:44:32.178 "data_offset": 0, 00:44:32.178 "data_size": 0 00:44:32.178 }, 00:44:32.178 { 00:44:32.178 "name": "BaseBdev4", 00:44:32.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:32.178 "is_configured": false, 00:44:32.178 "data_offset": 0, 00:44:32.178 "data_size": 0 00:44:32.178 } 00:44:32.178 ] 00:44:32.178 }' 00:44:32.178 16:21:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:32.178 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:44:32.436 16:21:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:44:32.697 [2024-07-22 16:21:36.811120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:32.697 BaseBdev3 00:44:32.697 16:21:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:44:32.697 16:21:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:44:32.697 16:21:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:32.697 
16:21:36 -- common/autotest_common.sh@889 -- # local i 00:44:32.697 16:21:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:32.697 16:21:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:32.697 16:21:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:32.955 16:21:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:44:33.213 [ 00:44:33.213 { 00:44:33.213 "name": "BaseBdev3", 00:44:33.213 "aliases": [ 00:44:33.213 "4f129d88-4450-4cb5-ba64-e01c1c523e9c" 00:44:33.213 ], 00:44:33.213 "product_name": "Malloc disk", 00:44:33.213 "block_size": 512, 00:44:33.213 "num_blocks": 65536, 00:44:33.213 "uuid": "4f129d88-4450-4cb5-ba64-e01c1c523e9c", 00:44:33.213 "assigned_rate_limits": { 00:44:33.213 "rw_ios_per_sec": 0, 00:44:33.213 "rw_mbytes_per_sec": 0, 00:44:33.213 "r_mbytes_per_sec": 0, 00:44:33.213 "w_mbytes_per_sec": 0 00:44:33.213 }, 00:44:33.213 "claimed": true, 00:44:33.213 "claim_type": "exclusive_write", 00:44:33.213 "zoned": false, 00:44:33.213 "supported_io_types": { 00:44:33.213 "read": true, 00:44:33.213 "write": true, 00:44:33.213 "unmap": true, 00:44:33.213 "write_zeroes": true, 00:44:33.213 "flush": true, 00:44:33.213 "reset": true, 00:44:33.213 "compare": false, 00:44:33.213 "compare_and_write": false, 00:44:33.213 "abort": true, 00:44:33.213 "nvme_admin": false, 00:44:33.213 "nvme_io": false 00:44:33.213 }, 00:44:33.213 "memory_domains": [ 00:44:33.213 { 00:44:33.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:33.213 "dma_device_type": 2 00:44:33.213 } 00:44:33.213 ], 00:44:33.213 "driver_specific": {} 00:44:33.213 } 00:44:33.213 ] 00:44:33.213 16:21:37 -- common/autotest_common.sh@895 -- # return 0 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:33.213 16:21:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:33.472 16:21:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:33.472 "name": "Existed_Raid", 00:44:33.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:33.472 "strip_size_kb": 64, 00:44:33.472 "state": "configuring", 00:44:33.472 "raid_level": "raid5f", 00:44:33.472 "superblock": false, 00:44:33.472 "num_base_bdevs": 4, 00:44:33.472 "num_base_bdevs_discovered": 3, 00:44:33.472 "num_base_bdevs_operational": 4, 00:44:33.472 "base_bdevs_list": [ 00:44:33.472 { 00:44:33.472 "name": 
"BaseBdev1", 00:44:33.472 "uuid": "4d28c662-70ae-46d5-b4e2-c924093ed4bf", 00:44:33.472 "is_configured": true, 00:44:33.472 "data_offset": 0, 00:44:33.472 "data_size": 65536 00:44:33.472 }, 00:44:33.472 { 00:44:33.472 "name": "BaseBdev2", 00:44:33.472 "uuid": "5d63e45d-f2dc-49c1-a0f3-c94010db574c", 00:44:33.472 "is_configured": true, 00:44:33.472 "data_offset": 0, 00:44:33.472 "data_size": 65536 00:44:33.472 }, 00:44:33.472 { 00:44:33.472 "name": "BaseBdev3", 00:44:33.472 "uuid": "4f129d88-4450-4cb5-ba64-e01c1c523e9c", 00:44:33.472 "is_configured": true, 00:44:33.472 "data_offset": 0, 00:44:33.472 "data_size": 65536 00:44:33.472 }, 00:44:33.472 { 00:44:33.472 "name": "BaseBdev4", 00:44:33.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:33.472 "is_configured": false, 00:44:33.472 "data_offset": 0, 00:44:33.472 "data_size": 0 00:44:33.472 } 00:44:33.472 ] 00:44:33.472 }' 00:44:33.472 16:21:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:33.472 16:21:37 -- common/autotest_common.sh@10 -- # set +x 00:44:33.763 16:21:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:44:34.022 [2024-07-22 16:21:38.220933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:44:34.022 [2024-07-22 16:21:38.221075] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:44:34.022 [2024-07-22 16:21:38.221100] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:44:34.022 [2024-07-22 16:21:38.221240] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:44:34.022 [2024-07-22 16:21:38.228237] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:44:34.022 [2024-07-22 16:21:38.228281] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:44:34.022 [2024-07-22 16:21:38.228644] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:34.022 BaseBdev4 00:44:34.022 16:21:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:44:34.022 16:21:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:44:34.022 16:21:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:34.022 16:21:38 -- common/autotest_common.sh@889 -- # local i 00:44:34.022 16:21:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:34.022 16:21:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:34.022 16:21:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:34.280 16:21:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:44:34.539 [ 00:44:34.539 { 00:44:34.539 "name": "BaseBdev4", 00:44:34.539 "aliases": [ 00:44:34.539 "ac941bf4-2777-4a17-9ebf-a3745d72c979" 00:44:34.539 ], 00:44:34.539 "product_name": "Malloc disk", 00:44:34.539 "block_size": 512, 00:44:34.539 "num_blocks": 65536, 00:44:34.539 "uuid": "ac941bf4-2777-4a17-9ebf-a3745d72c979", 00:44:34.539 "assigned_rate_limits": { 00:44:34.539 "rw_ios_per_sec": 0, 00:44:34.539 "rw_mbytes_per_sec": 0, 00:44:34.539 "r_mbytes_per_sec": 0, 00:44:34.539 "w_mbytes_per_sec": 0 00:44:34.539 }, 00:44:34.539 "claimed": true, 00:44:34.539 "claim_type": "exclusive_write", 00:44:34.539 "zoned": false, 00:44:34.539 
"supported_io_types": { 00:44:34.539 "read": true, 00:44:34.539 "write": true, 00:44:34.539 "unmap": true, 00:44:34.539 "write_zeroes": true, 00:44:34.539 "flush": true, 00:44:34.539 "reset": true, 00:44:34.539 "compare": false, 00:44:34.539 "compare_and_write": false, 00:44:34.539 "abort": true, 00:44:34.539 "nvme_admin": false, 00:44:34.539 "nvme_io": false 00:44:34.539 }, 00:44:34.539 "memory_domains": [ 00:44:34.539 { 00:44:34.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:34.539 "dma_device_type": 2 00:44:34.539 } 00:44:34.539 ], 00:44:34.539 "driver_specific": {} 00:44:34.539 } 00:44:34.539 ] 00:44:34.539 16:21:38 -- common/autotest_common.sh@895 -- # return 0 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:34.539 16:21:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:34.540 16:21:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:34.540 16:21:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:34.799 16:21:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:34.799 "name": "Existed_Raid", 00:44:34.799 "uuid": "ae012e6a-0b2f-4f75-91ed-e5cb8729deaa", 00:44:34.799 "strip_size_kb": 64, 00:44:34.799 "state": "online", 00:44:34.799 "raid_level": "raid5f", 00:44:34.799 "superblock": false, 00:44:34.799 "num_base_bdevs": 4, 00:44:34.799 "num_base_bdevs_discovered": 4, 00:44:34.799 "num_base_bdevs_operational": 4, 00:44:34.799 "base_bdevs_list": [ 00:44:34.799 { 00:44:34.799 "name": "BaseBdev1", 00:44:34.799 "uuid": "4d28c662-70ae-46d5-b4e2-c924093ed4bf", 00:44:34.799 "is_configured": true, 00:44:34.799 "data_offset": 0, 00:44:34.799 "data_size": 65536 00:44:34.799 }, 00:44:34.799 { 00:44:34.799 "name": "BaseBdev2", 00:44:34.799 "uuid": "5d63e45d-f2dc-49c1-a0f3-c94010db574c", 00:44:34.799 "is_configured": true, 00:44:34.799 "data_offset": 0, 00:44:34.799 "data_size": 65536 00:44:34.799 }, 00:44:34.799 { 00:44:34.799 "name": "BaseBdev3", 00:44:34.799 "uuid": "4f129d88-4450-4cb5-ba64-e01c1c523e9c", 00:44:34.799 "is_configured": true, 00:44:34.799 "data_offset": 0, 00:44:34.799 "data_size": 65536 00:44:34.799 }, 00:44:34.799 { 00:44:34.799 "name": "BaseBdev4", 00:44:34.799 "uuid": "ac941bf4-2777-4a17-9ebf-a3745d72c979", 00:44:34.799 "is_configured": true, 00:44:34.799 "data_offset": 0, 00:44:34.799 "data_size": 65536 00:44:34.799 } 00:44:34.799 ] 00:44:34.799 }' 00:44:34.799 16:21:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:34.799 16:21:39 -- common/autotest_common.sh@10 -- # set +x 00:44:35.366 16:21:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:44:35.366 [2024-07-22 16:21:39.548826] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:35.625 16:21:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:35.882 16:21:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:35.882 "name": "Existed_Raid", 00:44:35.882 "uuid": "ae012e6a-0b2f-4f75-91ed-e5cb8729deaa", 00:44:35.882 "strip_size_kb": 64, 00:44:35.882 "state": "online", 00:44:35.882 "raid_level": "raid5f", 00:44:35.882 "superblock": false, 00:44:35.882 "num_base_bdevs": 4, 00:44:35.882 "num_base_bdevs_discovered": 3, 00:44:35.882 "num_base_bdevs_operational": 3, 00:44:35.882 "base_bdevs_list": [ 00:44:35.882 { 00:44:35.882 "name": null, 00:44:35.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:35.882 "is_configured": false, 00:44:35.882 "data_offset": 0, 00:44:35.882 "data_size": 65536 00:44:35.882 }, 00:44:35.882 { 00:44:35.882 "name": "BaseBdev2", 00:44:35.882 "uuid": "5d63e45d-f2dc-49c1-a0f3-c94010db574c", 00:44:35.882 "is_configured": true, 00:44:35.882 "data_offset": 0, 00:44:35.882 "data_size": 65536 00:44:35.882 }, 00:44:35.882 { 00:44:35.882 "name": "BaseBdev3", 00:44:35.882 "uuid": "4f129d88-4450-4cb5-ba64-e01c1c523e9c", 00:44:35.882 "is_configured": true, 00:44:35.882 "data_offset": 0, 00:44:35.882 "data_size": 65536 00:44:35.882 }, 00:44:35.882 { 00:44:35.882 "name": "BaseBdev4", 00:44:35.882 "uuid": "ac941bf4-2777-4a17-9ebf-a3745d72c979", 00:44:35.882 "is_configured": true, 00:44:35.882 "data_offset": 0, 00:44:35.882 "data_size": 65536 00:44:35.882 } 00:44:35.882 ] 00:44:35.882 }' 00:44:35.882 16:21:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:35.882 16:21:39 -- common/autotest_common.sh@10 -- # set +x 00:44:36.139 16:21:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:44:36.140 16:21:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:36.140 16:21:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:36.140 16:21:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:44:36.397 16:21:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:44:36.397 16:21:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
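The sequence above is the core of the test: the raid5f bdev is created first against base bdevs that "don't exist now", malloc base bdevs are added one by one until the array reports "online", and then one base bdev is deleted to confirm the array stays online in degraded form. A hedged sketch of that flow, with the RPC commands and jq filters taken from the trace; the loop and variable names are illustrative, not the test script itself:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Created first, so it sits in "configuring" until all four base bdevs appear.
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"   # 32 MB of 512 B blocks = 65536 blocks, as in the log
  done
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "online"
  # Removing one base bdev from raid5f should leave the array online but degraded:
  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # expected: 3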
00:44:36.397 16:21:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:44:36.654 [2024-07-22 16:21:40.733033] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:44:36.654 [2024-07-22 16:21:40.733104] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:36.654 [2024-07-22 16:21:40.733175] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:36.654 16:21:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:44:36.654 16:21:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:36.654 16:21:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:44:36.654 16:21:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:37.221 16:21:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:44:37.221 16:21:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:37.222 16:21:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:44:37.222 [2024-07-22 16:21:41.424825] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:44:37.484 16:21:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:44:37.484 16:21:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:37.484 16:21:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:44:37.484 16:21:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:37.758 16:21:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:44:37.758 16:21:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:37.758 16:21:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:44:38.016 [2024-07-22 16:21:42.084852] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:44:38.016 [2024-07-22 16:21:42.085301] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:44:38.016 16:21:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:44:38.016 16:21:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:38.016 16:21:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:38.016 16:21:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:44:38.581 16:21:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:44:38.581 16:21:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:44:38.581 16:21:42 -- bdev/bdev_raid.sh@287 -- # killprocess 86188 00:44:38.581 16:21:42 -- common/autotest_common.sh@926 -- # '[' -z 86188 ']' 00:44:38.581 16:21:42 -- common/autotest_common.sh@930 -- # kill -0 86188 00:44:38.581 16:21:42 -- common/autotest_common.sh@931 -- # uname 00:44:38.581 16:21:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:44:38.581 16:21:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86188 00:44:38.581 killing process with pid 86188 00:44:38.581 16:21:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:44:38.581 16:21:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:44:38.581 16:21:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86188' 00:44:38.581 16:21:42 -- common/autotest_common.sh@945 
-- # kill 86188 00:44:38.581 16:21:42 -- common/autotest_common.sh@950 -- # wait 86188 00:44:38.581 [2024-07-22 16:21:42.633636] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:38.581 [2024-07-22 16:21:42.633791] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:39.957 ************************************ 00:44:39.957 END TEST raid5f_state_function_test 00:44:39.957 ************************************ 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:44:39.957 00:44:39.957 real 0m13.822s 00:44:39.957 user 0m22.703s 00:44:39.957 sys 0m2.305s 00:44:39.957 16:21:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:39.957 16:21:44 -- common/autotest_common.sh@10 -- # set +x 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:44:39.957 16:21:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:44:39.957 16:21:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:44:39.957 16:21:44 -- common/autotest_common.sh@10 -- # set +x 00:44:39.957 ************************************ 00:44:39.957 START TEST raid5f_state_function_test_sb 00:44:39.957 ************************************ 00:44:39.957 16:21:44 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=86592 
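The *_sb variant that starts here differs from the preceding run mainly in the -s flag passed to bdev_raid_create, which requests an on-disk superblock; that is presumably why the base bdevs later report data_offset 2048 / data_size 63488 instead of 0 / 65536. Illustrative invocation only, with the paths and names as they appear in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid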
00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 86592' 00:44:39.957 Process raid pid: 86592 00:44:39.957 16:21:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 86592 /var/tmp/spdk-raid.sock 00:44:39.957 16:21:44 -- common/autotest_common.sh@819 -- # '[' -z 86592 ']' 00:44:39.957 16:21:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:44:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:44:39.957 16:21:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:44:39.957 16:21:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:44:39.957 16:21:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:44:39.957 16:21:44 -- common/autotest_common.sh@10 -- # set +x 00:44:39.957 [2024-07-22 16:21:44.225514] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:44:39.957 [2024-07-22 16:21:44.225694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:40.215 [2024-07-22 16:21:44.401280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:40.474 [2024-07-22 16:21:44.709720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:40.737 [2024-07-22 16:21:44.928176] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:41.020 16:21:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:44:41.020 16:21:45 -- common/autotest_common.sh@852 -- # return 0 00:44:41.020 16:21:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:44:41.278 [2024-07-22 16:21:45.512933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:41.278 [2024-07-22 16:21:45.513082] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:41.278 [2024-07-22 16:21:45.513114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:41.278 [2024-07-22 16:21:45.513147] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:41.278 [2024-07-22 16:21:45.513160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:41.278 [2024-07-22 16:21:45.513179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:41.278 [2024-07-22 16:21:45.513190] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:41.278 [2024-07-22 16:21:45.513207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:41.278 16:21:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:41.278 16:21:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:41.279 16:21:45 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:41.279 16:21:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:41.844 16:21:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:41.844 "name": "Existed_Raid", 00:44:41.844 "uuid": "2277cee2-8811-4cfc-9ca8-57f7d4ff01c8", 00:44:41.844 "strip_size_kb": 64, 00:44:41.844 "state": "configuring", 00:44:41.844 "raid_level": "raid5f", 00:44:41.844 "superblock": true, 00:44:41.844 "num_base_bdevs": 4, 00:44:41.844 "num_base_bdevs_discovered": 0, 00:44:41.844 "num_base_bdevs_operational": 4, 00:44:41.844 "base_bdevs_list": [ 00:44:41.844 { 00:44:41.844 "name": "BaseBdev1", 00:44:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:41.844 "is_configured": false, 00:44:41.844 "data_offset": 0, 00:44:41.844 "data_size": 0 00:44:41.844 }, 00:44:41.844 { 00:44:41.844 "name": "BaseBdev2", 00:44:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:41.844 "is_configured": false, 00:44:41.844 "data_offset": 0, 00:44:41.844 "data_size": 0 00:44:41.844 }, 00:44:41.844 { 00:44:41.844 "name": "BaseBdev3", 00:44:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:41.844 "is_configured": false, 00:44:41.844 "data_offset": 0, 00:44:41.844 "data_size": 0 00:44:41.844 }, 00:44:41.844 { 00:44:41.844 "name": "BaseBdev4", 00:44:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:41.844 "is_configured": false, 00:44:41.844 "data_offset": 0, 00:44:41.844 "data_size": 0 00:44:41.844 } 00:44:41.844 ] 00:44:41.844 }' 00:44:41.844 16:21:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:41.844 16:21:45 -- common/autotest_common.sh@10 -- # set +x 00:44:42.102 16:21:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:44:42.359 [2024-07-22 16:21:46.413109] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:42.359 [2024-07-22 16:21:46.413208] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:44:42.359 16:21:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:44:42.618 [2024-07-22 16:21:46.785293] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:44:42.618 [2024-07-22 16:21:46.785428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:44:42.618 [2024-07-22 16:21:46.785446] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:42.618 [2024-07-22 16:21:46.785465] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:42.618 [2024-07-22 16:21:46.785475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:42.618 [2024-07-22 16:21:46.785491] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:42.618 [2024-07-22 16:21:46.785501] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:42.618 [2024-07-22 16:21:46.785519] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:42.618 16:21:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:44:42.876 [2024-07-22 16:21:47.117258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:42.876 BaseBdev1 00:44:42.876 16:21:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:44:42.876 16:21:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:44:42.876 16:21:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:42.876 16:21:47 -- common/autotest_common.sh@889 -- # local i 00:44:42.876 16:21:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:42.876 16:21:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:42.876 16:21:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:43.442 16:21:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:44:43.442 [ 00:44:43.442 { 00:44:43.442 "name": "BaseBdev1", 00:44:43.442 "aliases": [ 00:44:43.442 "e7f504d7-d0f9-4c5e-a26f-a48eecca371b" 00:44:43.442 ], 00:44:43.442 "product_name": "Malloc disk", 00:44:43.442 "block_size": 512, 00:44:43.442 "num_blocks": 65536, 00:44:43.442 "uuid": "e7f504d7-d0f9-4c5e-a26f-a48eecca371b", 00:44:43.442 "assigned_rate_limits": { 00:44:43.442 "rw_ios_per_sec": 0, 00:44:43.442 "rw_mbytes_per_sec": 0, 00:44:43.442 "r_mbytes_per_sec": 0, 00:44:43.442 "w_mbytes_per_sec": 0 00:44:43.442 }, 00:44:43.442 "claimed": true, 00:44:43.442 "claim_type": "exclusive_write", 00:44:43.442 "zoned": false, 00:44:43.442 "supported_io_types": { 00:44:43.442 "read": true, 00:44:43.442 "write": true, 00:44:43.442 "unmap": true, 00:44:43.442 "write_zeroes": true, 00:44:43.442 "flush": true, 00:44:43.442 "reset": true, 00:44:43.442 "compare": false, 00:44:43.442 "compare_and_write": false, 00:44:43.443 "abort": true, 00:44:43.443 "nvme_admin": false, 00:44:43.443 "nvme_io": false 00:44:43.443 }, 00:44:43.443 "memory_domains": [ 00:44:43.443 { 00:44:43.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:43.443 "dma_device_type": 2 00:44:43.443 } 00:44:43.443 ], 00:44:43.443 "driver_specific": {} 00:44:43.443 } 00:44:43.443 ] 00:44:43.443 16:21:47 -- common/autotest_common.sh@895 -- # return 0 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:43.443 16:21:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:44.008 16:21:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:44.008 "name": "Existed_Raid", 00:44:44.008 "uuid": "1b68a8cd-6bfa-4706-8aae-3b8db3559e51", 00:44:44.008 "strip_size_kb": 64, 00:44:44.008 "state": "configuring", 00:44:44.008 "raid_level": "raid5f", 00:44:44.008 "superblock": true, 00:44:44.008 "num_base_bdevs": 4, 00:44:44.008 "num_base_bdevs_discovered": 1, 00:44:44.008 "num_base_bdevs_operational": 4, 00:44:44.008 "base_bdevs_list": [ 00:44:44.008 { 00:44:44.008 "name": "BaseBdev1", 00:44:44.008 "uuid": "e7f504d7-d0f9-4c5e-a26f-a48eecca371b", 00:44:44.008 "is_configured": true, 00:44:44.008 "data_offset": 2048, 00:44:44.008 "data_size": 63488 00:44:44.008 }, 00:44:44.008 { 00:44:44.008 "name": "BaseBdev2", 00:44:44.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:44.008 "is_configured": false, 00:44:44.008 "data_offset": 0, 00:44:44.008 "data_size": 0 00:44:44.008 }, 00:44:44.008 { 00:44:44.008 "name": "BaseBdev3", 00:44:44.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:44.008 "is_configured": false, 00:44:44.008 "data_offset": 0, 00:44:44.008 "data_size": 0 00:44:44.008 }, 00:44:44.008 { 00:44:44.008 "name": "BaseBdev4", 00:44:44.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:44.008 "is_configured": false, 00:44:44.008 "data_offset": 0, 00:44:44.008 "data_size": 0 00:44:44.008 } 00:44:44.008 ] 00:44:44.008 }' 00:44:44.008 16:21:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:44.008 16:21:48 -- common/autotest_common.sh@10 -- # set +x 00:44:44.266 16:21:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:44:44.523 [2024-07-22 16:21:48.621901] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:44:44.523 [2024-07-22 16:21:48.622331] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:44:44.523 16:21:48 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:44:44.523 16:21:48 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:44:44.781 16:21:48 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:44:45.039 BaseBdev1 00:44:45.039 16:21:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:44:45.039 16:21:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:44:45.039 16:21:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:45.039 16:21:49 -- common/autotest_common.sh@889 -- # local i 00:44:45.039 16:21:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:45.039 16:21:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:45.039 16:21:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:45.297 16:21:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:44:45.555 [ 00:44:45.555 { 00:44:45.555 "name": "BaseBdev1", 00:44:45.555 "aliases": [ 00:44:45.555 "4ca83578-c054-4938-be7d-b4e6e11527a6" 00:44:45.555 ], 
00:44:45.555 "product_name": "Malloc disk", 00:44:45.555 "block_size": 512, 00:44:45.555 "num_blocks": 65536, 00:44:45.555 "uuid": "4ca83578-c054-4938-be7d-b4e6e11527a6", 00:44:45.555 "assigned_rate_limits": { 00:44:45.555 "rw_ios_per_sec": 0, 00:44:45.555 "rw_mbytes_per_sec": 0, 00:44:45.555 "r_mbytes_per_sec": 0, 00:44:45.555 "w_mbytes_per_sec": 0 00:44:45.555 }, 00:44:45.555 "claimed": false, 00:44:45.555 "zoned": false, 00:44:45.555 "supported_io_types": { 00:44:45.555 "read": true, 00:44:45.555 "write": true, 00:44:45.555 "unmap": true, 00:44:45.555 "write_zeroes": true, 00:44:45.555 "flush": true, 00:44:45.555 "reset": true, 00:44:45.555 "compare": false, 00:44:45.555 "compare_and_write": false, 00:44:45.555 "abort": true, 00:44:45.555 "nvme_admin": false, 00:44:45.555 "nvme_io": false 00:44:45.555 }, 00:44:45.555 "memory_domains": [ 00:44:45.555 { 00:44:45.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:45.555 "dma_device_type": 2 00:44:45.555 } 00:44:45.555 ], 00:44:45.555 "driver_specific": {} 00:44:45.555 } 00:44:45.555 ] 00:44:45.813 16:21:49 -- common/autotest_common.sh@895 -- # return 0 00:44:45.813 16:21:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:44:45.813 [2024-07-22 16:21:50.080373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:44:45.813 [2024-07-22 16:21:50.083086] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:44:45.813 [2024-07-22 16:21:50.083168] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:44:45.813 [2024-07-22 16:21:50.083185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:44:45.813 [2024-07-22 16:21:50.083202] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:44:45.813 [2024-07-22 16:21:50.083212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:44:45.813 [2024-07-22 16:21:50.083231] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:46.071 16:21:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:46.330 16:21:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:46.330 "name": "Existed_Raid", 
00:44:46.330 "uuid": "da349949-5f4c-4e7b-9c2b-9d8422b2e8db", 00:44:46.330 "strip_size_kb": 64, 00:44:46.330 "state": "configuring", 00:44:46.330 "raid_level": "raid5f", 00:44:46.330 "superblock": true, 00:44:46.330 "num_base_bdevs": 4, 00:44:46.330 "num_base_bdevs_discovered": 1, 00:44:46.330 "num_base_bdevs_operational": 4, 00:44:46.330 "base_bdevs_list": [ 00:44:46.330 { 00:44:46.330 "name": "BaseBdev1", 00:44:46.330 "uuid": "4ca83578-c054-4938-be7d-b4e6e11527a6", 00:44:46.330 "is_configured": true, 00:44:46.330 "data_offset": 2048, 00:44:46.330 "data_size": 63488 00:44:46.330 }, 00:44:46.330 { 00:44:46.330 "name": "BaseBdev2", 00:44:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:46.330 "is_configured": false, 00:44:46.330 "data_offset": 0, 00:44:46.330 "data_size": 0 00:44:46.330 }, 00:44:46.330 { 00:44:46.330 "name": "BaseBdev3", 00:44:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:46.330 "is_configured": false, 00:44:46.330 "data_offset": 0, 00:44:46.330 "data_size": 0 00:44:46.330 }, 00:44:46.330 { 00:44:46.330 "name": "BaseBdev4", 00:44:46.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:46.330 "is_configured": false, 00:44:46.330 "data_offset": 0, 00:44:46.330 "data_size": 0 00:44:46.330 } 00:44:46.330 ] 00:44:46.330 }' 00:44:46.330 16:21:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:46.330 16:21:50 -- common/autotest_common.sh@10 -- # set +x 00:44:46.589 16:21:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:44:46.847 [2024-07-22 16:21:51.007445] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:44:46.847 BaseBdev2 00:44:46.847 16:21:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:44:46.847 16:21:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:44:46.847 16:21:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:46.847 16:21:51 -- common/autotest_common.sh@889 -- # local i 00:44:46.847 16:21:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:46.847 16:21:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:46.847 16:21:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:47.105 16:21:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:44:47.363 [ 00:44:47.363 { 00:44:47.363 "name": "BaseBdev2", 00:44:47.363 "aliases": [ 00:44:47.363 "010980be-f7cb-465d-a012-e4037d86fe1a" 00:44:47.363 ], 00:44:47.363 "product_name": "Malloc disk", 00:44:47.363 "block_size": 512, 00:44:47.363 "num_blocks": 65536, 00:44:47.363 "uuid": "010980be-f7cb-465d-a012-e4037d86fe1a", 00:44:47.363 "assigned_rate_limits": { 00:44:47.363 "rw_ios_per_sec": 0, 00:44:47.363 "rw_mbytes_per_sec": 0, 00:44:47.363 "r_mbytes_per_sec": 0, 00:44:47.363 "w_mbytes_per_sec": 0 00:44:47.363 }, 00:44:47.363 "claimed": true, 00:44:47.363 "claim_type": "exclusive_write", 00:44:47.363 "zoned": false, 00:44:47.363 "supported_io_types": { 00:44:47.363 "read": true, 00:44:47.363 "write": true, 00:44:47.363 "unmap": true, 00:44:47.363 "write_zeroes": true, 00:44:47.363 "flush": true, 00:44:47.363 "reset": true, 00:44:47.363 "compare": false, 00:44:47.363 "compare_and_write": false, 00:44:47.363 "abort": true, 00:44:47.363 "nvme_admin": false, 00:44:47.363 "nvme_io": false 00:44:47.363 }, 00:44:47.363 
"memory_domains": [ 00:44:47.363 { 00:44:47.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:47.363 "dma_device_type": 2 00:44:47.363 } 00:44:47.363 ], 00:44:47.363 "driver_specific": {} 00:44:47.363 } 00:44:47.363 ] 00:44:47.363 16:21:51 -- common/autotest_common.sh@895 -- # return 0 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:47.363 16:21:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:47.624 16:21:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:47.624 "name": "Existed_Raid", 00:44:47.624 "uuid": "da349949-5f4c-4e7b-9c2b-9d8422b2e8db", 00:44:47.624 "strip_size_kb": 64, 00:44:47.624 "state": "configuring", 00:44:47.624 "raid_level": "raid5f", 00:44:47.624 "superblock": true, 00:44:47.624 "num_base_bdevs": 4, 00:44:47.624 "num_base_bdevs_discovered": 2, 00:44:47.624 "num_base_bdevs_operational": 4, 00:44:47.624 "base_bdevs_list": [ 00:44:47.624 { 00:44:47.624 "name": "BaseBdev1", 00:44:47.624 "uuid": "4ca83578-c054-4938-be7d-b4e6e11527a6", 00:44:47.624 "is_configured": true, 00:44:47.624 "data_offset": 2048, 00:44:47.624 "data_size": 63488 00:44:47.624 }, 00:44:47.624 { 00:44:47.624 "name": "BaseBdev2", 00:44:47.624 "uuid": "010980be-f7cb-465d-a012-e4037d86fe1a", 00:44:47.624 "is_configured": true, 00:44:47.624 "data_offset": 2048, 00:44:47.624 "data_size": 63488 00:44:47.624 }, 00:44:47.624 { 00:44:47.624 "name": "BaseBdev3", 00:44:47.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:47.624 "is_configured": false, 00:44:47.624 "data_offset": 0, 00:44:47.624 "data_size": 0 00:44:47.624 }, 00:44:47.624 { 00:44:47.624 "name": "BaseBdev4", 00:44:47.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:47.624 "is_configured": false, 00:44:47.624 "data_offset": 0, 00:44:47.624 "data_size": 0 00:44:47.624 } 00:44:47.624 ] 00:44:47.624 }' 00:44:47.624 16:21:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:47.624 16:21:51 -- common/autotest_common.sh@10 -- # set +x 00:44:47.882 16:21:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:44:48.139 [2024-07-22 16:21:52.402793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:44:48.139 BaseBdev3 00:44:48.397 16:21:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:44:48.397 16:21:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:44:48.397 16:21:52 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:44:48.397 16:21:52 -- common/autotest_common.sh@889 -- # local i 00:44:48.397 16:21:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:48.397 16:21:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:48.397 16:21:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:48.654 16:21:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:44:48.654 [ 00:44:48.654 { 00:44:48.654 "name": "BaseBdev3", 00:44:48.654 "aliases": [ 00:44:48.654 "7e51deab-3346-4689-9539-6d333186f2b0" 00:44:48.654 ], 00:44:48.654 "product_name": "Malloc disk", 00:44:48.654 "block_size": 512, 00:44:48.654 "num_blocks": 65536, 00:44:48.654 "uuid": "7e51deab-3346-4689-9539-6d333186f2b0", 00:44:48.654 "assigned_rate_limits": { 00:44:48.654 "rw_ios_per_sec": 0, 00:44:48.654 "rw_mbytes_per_sec": 0, 00:44:48.654 "r_mbytes_per_sec": 0, 00:44:48.654 "w_mbytes_per_sec": 0 00:44:48.654 }, 00:44:48.654 "claimed": true, 00:44:48.654 "claim_type": "exclusive_write", 00:44:48.654 "zoned": false, 00:44:48.654 "supported_io_types": { 00:44:48.654 "read": true, 00:44:48.654 "write": true, 00:44:48.654 "unmap": true, 00:44:48.654 "write_zeroes": true, 00:44:48.654 "flush": true, 00:44:48.654 "reset": true, 00:44:48.654 "compare": false, 00:44:48.654 "compare_and_write": false, 00:44:48.654 "abort": true, 00:44:48.655 "nvme_admin": false, 00:44:48.655 "nvme_io": false 00:44:48.655 }, 00:44:48.655 "memory_domains": [ 00:44:48.655 { 00:44:48.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:48.655 "dma_device_type": 2 00:44:48.655 } 00:44:48.655 ], 00:44:48.655 "driver_specific": {} 00:44:48.655 } 00:44:48.655 ] 00:44:48.911 16:21:52 -- common/autotest_common.sh@895 -- # return 0 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:48.911 16:21:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:49.168 16:21:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:49.168 "name": "Existed_Raid", 00:44:49.168 "uuid": "da349949-5f4c-4e7b-9c2b-9d8422b2e8db", 00:44:49.168 "strip_size_kb": 64, 00:44:49.168 "state": "configuring", 00:44:49.168 "raid_level": "raid5f", 00:44:49.168 "superblock": true, 00:44:49.168 "num_base_bdevs": 4, 00:44:49.168 "num_base_bdevs_discovered": 3, 00:44:49.168 "num_base_bdevs_operational": 4, 00:44:49.168 "base_bdevs_list": [ 00:44:49.168 { 
00:44:49.168 "name": "BaseBdev1", 00:44:49.168 "uuid": "4ca83578-c054-4938-be7d-b4e6e11527a6", 00:44:49.168 "is_configured": true, 00:44:49.168 "data_offset": 2048, 00:44:49.168 "data_size": 63488 00:44:49.168 }, 00:44:49.168 { 00:44:49.168 "name": "BaseBdev2", 00:44:49.168 "uuid": "010980be-f7cb-465d-a012-e4037d86fe1a", 00:44:49.168 "is_configured": true, 00:44:49.168 "data_offset": 2048, 00:44:49.168 "data_size": 63488 00:44:49.168 }, 00:44:49.168 { 00:44:49.168 "name": "BaseBdev3", 00:44:49.168 "uuid": "7e51deab-3346-4689-9539-6d333186f2b0", 00:44:49.168 "is_configured": true, 00:44:49.168 "data_offset": 2048, 00:44:49.168 "data_size": 63488 00:44:49.168 }, 00:44:49.168 { 00:44:49.168 "name": "BaseBdev4", 00:44:49.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:49.168 "is_configured": false, 00:44:49.168 "data_offset": 0, 00:44:49.168 "data_size": 0 00:44:49.168 } 00:44:49.168 ] 00:44:49.168 }' 00:44:49.168 16:21:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:49.168 16:21:53 -- common/autotest_common.sh@10 -- # set +x 00:44:49.425 16:21:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:44:49.683 [2024-07-22 16:21:53.796942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:44:49.683 [2024-07-22 16:21:53.797293] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:44:49.683 [2024-07-22 16:21:53.797312] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:44:49.683 [2024-07-22 16:21:53.797414] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:44:49.683 BaseBdev4 00:44:49.683 [2024-07-22 16:21:53.803822] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:44:49.683 [2024-07-22 16:21:53.804198] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:44:49.683 [2024-07-22 16:21:53.804803] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:49.683 16:21:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:44:49.683 16:21:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:44:49.683 16:21:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:44:49.683 16:21:53 -- common/autotest_common.sh@889 -- # local i 00:44:49.683 16:21:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:44:49.683 16:21:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:44:49.683 16:21:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:44:49.941 16:21:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:44:50.233 [ 00:44:50.233 { 00:44:50.233 "name": "BaseBdev4", 00:44:50.233 "aliases": [ 00:44:50.233 "eddae31d-afd1-487c-aec8-38833b11b1ac" 00:44:50.233 ], 00:44:50.233 "product_name": "Malloc disk", 00:44:50.233 "block_size": 512, 00:44:50.233 "num_blocks": 65536, 00:44:50.233 "uuid": "eddae31d-afd1-487c-aec8-38833b11b1ac", 00:44:50.233 "assigned_rate_limits": { 00:44:50.233 "rw_ios_per_sec": 0, 00:44:50.233 "rw_mbytes_per_sec": 0, 00:44:50.233 "r_mbytes_per_sec": 0, 00:44:50.233 "w_mbytes_per_sec": 0 00:44:50.233 }, 00:44:50.234 "claimed": true, 00:44:50.234 "claim_type": "exclusive_write", 00:44:50.234 "zoned": false, 
00:44:50.234 "supported_io_types": { 00:44:50.234 "read": true, 00:44:50.234 "write": true, 00:44:50.234 "unmap": true, 00:44:50.234 "write_zeroes": true, 00:44:50.234 "flush": true, 00:44:50.234 "reset": true, 00:44:50.234 "compare": false, 00:44:50.234 "compare_and_write": false, 00:44:50.234 "abort": true, 00:44:50.234 "nvme_admin": false, 00:44:50.234 "nvme_io": false 00:44:50.234 }, 00:44:50.234 "memory_domains": [ 00:44:50.234 { 00:44:50.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:44:50.234 "dma_device_type": 2 00:44:50.234 } 00:44:50.234 ], 00:44:50.234 "driver_specific": {} 00:44:50.234 } 00:44:50.234 ] 00:44:50.234 16:21:54 -- common/autotest_common.sh@895 -- # return 0 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:50.234 16:21:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:50.519 16:21:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:50.519 "name": "Existed_Raid", 00:44:50.519 "uuid": "da349949-5f4c-4e7b-9c2b-9d8422b2e8db", 00:44:50.519 "strip_size_kb": 64, 00:44:50.519 "state": "online", 00:44:50.519 "raid_level": "raid5f", 00:44:50.519 "superblock": true, 00:44:50.519 "num_base_bdevs": 4, 00:44:50.519 "num_base_bdevs_discovered": 4, 00:44:50.519 "num_base_bdevs_operational": 4, 00:44:50.519 "base_bdevs_list": [ 00:44:50.519 { 00:44:50.519 "name": "BaseBdev1", 00:44:50.519 "uuid": "4ca83578-c054-4938-be7d-b4e6e11527a6", 00:44:50.519 "is_configured": true, 00:44:50.519 "data_offset": 2048, 00:44:50.519 "data_size": 63488 00:44:50.519 }, 00:44:50.519 { 00:44:50.519 "name": "BaseBdev2", 00:44:50.519 "uuid": "010980be-f7cb-465d-a012-e4037d86fe1a", 00:44:50.519 "is_configured": true, 00:44:50.519 "data_offset": 2048, 00:44:50.519 "data_size": 63488 00:44:50.519 }, 00:44:50.519 { 00:44:50.519 "name": "BaseBdev3", 00:44:50.519 "uuid": "7e51deab-3346-4689-9539-6d333186f2b0", 00:44:50.519 "is_configured": true, 00:44:50.519 "data_offset": 2048, 00:44:50.519 "data_size": 63488 00:44:50.519 }, 00:44:50.519 { 00:44:50.519 "name": "BaseBdev4", 00:44:50.519 "uuid": "eddae31d-afd1-487c-aec8-38833b11b1ac", 00:44:50.519 "is_configured": true, 00:44:50.519 "data_offset": 2048, 00:44:50.519 "data_size": 63488 00:44:50.519 } 00:44:50.519 ] 00:44:50.519 }' 00:44:50.519 16:21:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:50.519 16:21:54 -- common/autotest_common.sh@10 -- # set +x 00:44:50.777 16:21:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:44:51.036 [2024-07-22 16:21:55.132065] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@196 -- # return 0 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:44:51.036 16:21:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:51.294 16:21:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:51.294 "name": "Existed_Raid", 00:44:51.294 "uuid": "da349949-5f4c-4e7b-9c2b-9d8422b2e8db", 00:44:51.294 "strip_size_kb": 64, 00:44:51.294 "state": "online", 00:44:51.294 "raid_level": "raid5f", 00:44:51.294 "superblock": true, 00:44:51.294 "num_base_bdevs": 4, 00:44:51.294 "num_base_bdevs_discovered": 3, 00:44:51.294 "num_base_bdevs_operational": 3, 00:44:51.294 "base_bdevs_list": [ 00:44:51.294 { 00:44:51.294 "name": null, 00:44:51.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:44:51.294 "is_configured": false, 00:44:51.294 "data_offset": 2048, 00:44:51.294 "data_size": 63488 00:44:51.294 }, 00:44:51.294 { 00:44:51.294 "name": "BaseBdev2", 00:44:51.294 "uuid": "010980be-f7cb-465d-a012-e4037d86fe1a", 00:44:51.294 "is_configured": true, 00:44:51.294 "data_offset": 2048, 00:44:51.294 "data_size": 63488 00:44:51.294 }, 00:44:51.294 { 00:44:51.294 "name": "BaseBdev3", 00:44:51.294 "uuid": "7e51deab-3346-4689-9539-6d333186f2b0", 00:44:51.294 "is_configured": true, 00:44:51.294 "data_offset": 2048, 00:44:51.294 "data_size": 63488 00:44:51.294 }, 00:44:51.294 { 00:44:51.294 "name": "BaseBdev4", 00:44:51.294 "uuid": "eddae31d-afd1-487c-aec8-38833b11b1ac", 00:44:51.294 "is_configured": true, 00:44:51.294 "data_offset": 2048, 00:44:51.294 "data_size": 63488 00:44:51.294 } 00:44:51.294 ] 00:44:51.294 }' 00:44:51.294 16:21:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:51.294 16:21:55 -- common/autotest_common.sh@10 -- # set +x 00:44:51.552 16:21:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:44:51.552 16:21:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:51.552 16:21:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:51.810 16:21:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:44:52.069 16:21:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:44:52.069 16:21:56 -- bdev/bdev_raid.sh@275 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:44:52.069 16:21:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:44:52.327 [2024-07-22 16:21:56.361178] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:44:52.327 [2024-07-22 16:21:56.361477] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:52.327 [2024-07-22 16:21:56.361581] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:52.327 16:21:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:44:52.327 16:21:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:52.327 16:21:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:52.327 16:21:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:44:52.586 16:21:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:44:52.586 16:21:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:52.586 16:21:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:44:52.844 [2024-07-22 16:21:57.001213] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:44:53.101 16:21:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:44:53.101 16:21:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:53.101 16:21:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:53.101 16:21:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:44:53.359 16:21:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:44:53.359 16:21:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:44:53.359 16:21:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:44:53.616 [2024-07-22 16:21:57.642479] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:44:53.616 [2024-07-22 16:21:57.642614] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:44:53.616 16:21:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:44:53.616 16:21:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:44:53.616 16:21:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:53.616 16:21:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:44:53.874 16:21:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:44:53.874 16:21:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:44:53.874 16:21:58 -- bdev/bdev_raid.sh@287 -- # killprocess 86592 00:44:53.874 16:21:58 -- common/autotest_common.sh@926 -- # '[' -z 86592 ']' 00:44:53.874 16:21:58 -- common/autotest_common.sh@930 -- # kill -0 86592 00:44:53.874 16:21:58 -- common/autotest_common.sh@931 -- # uname 00:44:53.874 16:21:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:44:53.874 16:21:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86592 00:44:53.874 killing process with pid 86592 00:44:53.874 16:21:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:44:53.874 16:21:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:44:53.874 16:21:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86592' 00:44:53.874 16:21:58 
-- common/autotest_common.sh@945 -- # kill 86592 00:44:53.874 16:21:58 -- common/autotest_common.sh@950 -- # wait 86592 00:44:53.874 [2024-07-22 16:21:58.085226] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:44:53.874 [2024-07-22 16:21:58.085379] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:44:55.247 00:44:55.247 real 0m15.237s 00:44:55.247 user 0m25.254s 00:44:55.247 sys 0m2.577s 00:44:55.247 ************************************ 00:44:55.247 END TEST raid5f_state_function_test_sb 00:44:55.247 ************************************ 00:44:55.247 16:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:55.247 16:21:59 -- common/autotest_common.sh@10 -- # set +x 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:44:55.247 16:21:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:44:55.247 16:21:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:44:55.247 16:21:59 -- common/autotest_common.sh@10 -- # set +x 00:44:55.247 ************************************ 00:44:55.247 START TEST raid5f_superblock_test 00:44:55.247 ************************************ 00:44:55.247 16:21:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=87013 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 87013 /var/tmp/spdk-raid.sock 00:44:55.247 16:21:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:44:55.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:44:55.247 16:21:59 -- common/autotest_common.sh@819 -- # '[' -z 87013 ']' 00:44:55.247 16:21:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:44:55.247 16:21:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:44:55.247 16:21:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
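For anyone skimming the xtrace above: the state-function test that just finished (raid5f_state_function_test_sb) exercised the bdev RAID RPCs end to end: create malloc base bdevs, assemble a raid5f array with a superblock, verify its state, then remove members. A minimal stand-alone sketch of that flow, reconstructed from the traced commands (simplified, error handling omitted, and assuming an SPDK app such as bdev_svc is already listening on /var/tmp/spdk-raid.sock with rpc.py at the path shown in the trace; this is not the verbatim test script):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Four 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each, per the dumps above).
for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b BaseBdev$i
done

# Assemble them into a raid5f bdev: 64 KiB strip size (-z 64), superblock enabled (-s).
$RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# The array reports "configuring" until every member is claimed, then "online".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

# Removing one base bdev leaves raid5f online with 3 of 4 members operational,
# which is what the verify_raid_bdev_state calls above assert.
$RPC bdev_malloc_delete BaseBdev1
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'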
00:44:55.247 16:21:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:44:55.247 16:21:59 -- common/autotest_common.sh@10 -- # set +x 00:44:55.247 [2024-07-22 16:21:59.518127] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:44:55.247 [2024-07-22 16:21:59.518318] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87013 ] 00:44:55.505 [2024-07-22 16:21:59.695445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:55.763 [2024-07-22 16:21:59.965832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:56.021 [2024-07-22 16:22:00.186327] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:44:56.279 16:22:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:44:56.279 16:22:00 -- common/autotest_common.sh@852 -- # return 0 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:56.279 16:22:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:44:56.576 malloc1 00:44:56.576 16:22:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:44:56.836 [2024-07-22 16:22:01.041711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:44:56.836 [2024-07-22 16:22:01.041845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:56.836 [2024-07-22 16:22:01.041898] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:44:56.836 [2024-07-22 16:22:01.041917] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:56.836 [2024-07-22 16:22:01.044919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:56.836 [2024-07-22 16:22:01.044968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:44:56.836 pt1 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:56.836 16:22:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:44:57.094 malloc2 00:44:57.094 16:22:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:44:57.352 [2024-07-22 16:22:01.553765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:44:57.352 [2024-07-22 16:22:01.553877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:57.352 [2024-07-22 16:22:01.553918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:44:57.352 [2024-07-22 16:22:01.553936] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:57.352 [2024-07-22 16:22:01.556871] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:57.352 [2024-07-22 16:22:01.556916] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:44:57.353 pt2 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:57.353 16:22:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:44:57.611 malloc3 00:44:57.611 16:22:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:44:57.869 [2024-07-22 16:22:02.043559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:44:57.869 [2024-07-22 16:22:02.043680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:57.869 [2024-07-22 16:22:02.043724] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:44:57.869 [2024-07-22 16:22:02.043741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:57.869 [2024-07-22 16:22:02.046684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:57.869 [2024-07-22 16:22:02.046729] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:44:57.869 pt3 00:44:57.869 16:22:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:44:57.869 16:22:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:44:57.869 16:22:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:44:57.869 16:22:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:44:57.869 16:22:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:44:57.869 16:22:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:44:57.870 16:22:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:44:57.870 16:22:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:44:57.870 16:22:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:44:58.128 malloc4 00:44:58.128 16:22:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:44:58.386 [2024-07-22 16:22:02.545596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:44:58.386 [2024-07-22 16:22:02.545699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:44:58.386 [2024-07-22 16:22:02.545750] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:44:58.386 [2024-07-22 16:22:02.545767] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:44:58.386 [2024-07-22 16:22:02.548762] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:44:58.386 [2024-07-22 16:22:02.548809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:44:58.386 pt4 00:44:58.386 16:22:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:44:58.386 16:22:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:44:58.386 16:22:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:44:58.644 [2024-07-22 16:22:02.797971] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:44:58.645 [2024-07-22 16:22:02.800512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:44:58.645 [2024-07-22 16:22:02.800783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:44:58.645 [2024-07-22 16:22:02.800872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:44:58.645 [2024-07-22 16:22:02.801194] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:44:58.645 [2024-07-22 16:22:02.801214] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:44:58.645 [2024-07-22 16:22:02.801368] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:44:58.645 [2024-07-22 16:22:02.808644] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:44:58.645 [2024-07-22 16:22:02.808682] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:44:58.645 [2024-07-22 16:22:02.808929] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:44:58.645 16:22:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:44:58.905 16:22:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:44:58.905 "name": "raid_bdev1", 00:44:58.905 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:44:58.905 "strip_size_kb": 64, 00:44:58.905 "state": "online", 00:44:58.905 "raid_level": "raid5f", 00:44:58.905 "superblock": true, 00:44:58.905 "num_base_bdevs": 4, 00:44:58.905 "num_base_bdevs_discovered": 4, 00:44:58.905 "num_base_bdevs_operational": 4, 00:44:58.905 "base_bdevs_list": [ 00:44:58.905 { 00:44:58.905 "name": "pt1", 00:44:58.905 "uuid": "057868b4-aa16-5820-96b5-561d38cddb6f", 00:44:58.905 "is_configured": true, 00:44:58.905 "data_offset": 2048, 00:44:58.905 "data_size": 63488 00:44:58.905 }, 00:44:58.905 { 00:44:58.905 "name": "pt2", 00:44:58.905 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:44:58.905 "is_configured": true, 00:44:58.905 "data_offset": 2048, 00:44:58.905 "data_size": 63488 00:44:58.905 }, 00:44:58.905 { 00:44:58.905 "name": "pt3", 00:44:58.905 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:44:58.905 "is_configured": true, 00:44:58.905 "data_offset": 2048, 00:44:58.905 "data_size": 63488 00:44:58.905 }, 00:44:58.905 { 00:44:58.905 "name": "pt4", 00:44:58.905 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:44:58.905 "is_configured": true, 00:44:58.905 "data_offset": 2048, 00:44:58.905 "data_size": 63488 00:44:58.905 } 00:44:58.905 ] 00:44:58.905 }' 00:44:58.905 16:22:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:44:58.905 16:22:03 -- common/autotest_common.sh@10 -- # set +x 00:44:59.164 16:22:03 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:44:59.164 16:22:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:44:59.420 [2024-07-22 16:22:03.621319] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:44:59.420 16:22:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb 00:44:59.420 16:22:03 -- bdev/bdev_raid.sh@380 -- # '[' -z 70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb ']' 00:44:59.420 16:22:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:44:59.676 [2024-07-22 16:22:03.905055] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:44:59.676 [2024-07-22 16:22:03.905119] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:44:59.676 [2024-07-22 16:22:03.905243] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:44:59.676 [2024-07-22 16:22:03.905380] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:44:59.676 [2024-07-22 16:22:03.905399] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:44:59.676 16:22:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:44:59.676 16:22:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:45:00.241 16:22:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:45:00.241 16:22:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:45:00.241 16:22:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:45:00.241 16:22:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:45:00.241 16:22:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:45:00.241 16:22:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:45:00.498 16:22:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:45:00.498 16:22:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:45:01.061 16:22:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:45:01.061 16:22:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:45:01.318 16:22:05 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:45:01.318 16:22:05 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:45:01.576 16:22:05 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:45:01.576 16:22:05 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:45:01.576 16:22:05 -- common/autotest_common.sh@640 -- # local es=0 00:45:01.576 16:22:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:45:01.576 16:22:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:01.576 16:22:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:01.576 16:22:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:01.576 16:22:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:01.576 16:22:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:01.576 16:22:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:45:01.576 16:22:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:01.576 16:22:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:45:01.576 16:22:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:45:01.833 [2024-07-22 16:22:05.849441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:45:01.833 [2024-07-22 16:22:05.851955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:45:01.833 [2024-07-22 16:22:05.852200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:45:01.833 [2024-07-22 16:22:05.852292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:45:01.833 [2024-07-22 16:22:05.852375] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:45:01.833 [2024-07-22 16:22:05.852454] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:45:01.833 [2024-07-22 16:22:05.852492] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:45:01.833 
[2024-07-22 16:22:05.852533] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:45:01.833 [2024-07-22 16:22:05.852561] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:01.833 [2024-07-22 16:22:05.852576] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:45:01.833 request: 00:45:01.833 { 00:45:01.833 "name": "raid_bdev1", 00:45:01.833 "raid_level": "raid5f", 00:45:01.833 "base_bdevs": [ 00:45:01.833 "malloc1", 00:45:01.833 "malloc2", 00:45:01.833 "malloc3", 00:45:01.833 "malloc4" 00:45:01.833 ], 00:45:01.833 "superblock": false, 00:45:01.833 "strip_size_kb": 64, 00:45:01.833 "method": "bdev_raid_create", 00:45:01.833 "req_id": 1 00:45:01.833 } 00:45:01.833 Got JSON-RPC error response 00:45:01.833 response: 00:45:01.833 { 00:45:01.833 "code": -17, 00:45:01.833 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:45:01.833 } 00:45:01.833 16:22:05 -- common/autotest_common.sh@643 -- # es=1 00:45:01.833 16:22:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:45:01.833 16:22:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:45:01.833 16:22:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:45:01.833 16:22:05 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:01.833 16:22:05 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:45:02.091 16:22:06 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:45:02.091 16:22:06 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:45:02.091 16:22:06 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:45:02.348 [2024-07-22 16:22:06.373571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:45:02.348 [2024-07-22 16:22:06.373708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:02.348 [2024-07-22 16:22:06.373748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:45:02.348 [2024-07-22 16:22:06.373764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:02.348 [2024-07-22 16:22:06.376909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:02.348 [2024-07-22 16:22:06.376956] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:45:02.348 [2024-07-22 16:22:06.377110] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:45:02.348 [2024-07-22 16:22:06.377200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:45:02.348 pt1 00:45:02.348 16:22:06 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:45:02.348 16:22:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:02.348 16:22:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:45:02.348 16:22:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:02.348 16:22:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:02.349 16:22:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:45:02.349 16:22:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:02.349 16:22:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:02.349 16:22:06 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:45:02.349 16:22:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:02.349 16:22:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:02.349 16:22:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:02.605 16:22:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:02.605 "name": "raid_bdev1", 00:45:02.605 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:02.605 "strip_size_kb": 64, 00:45:02.605 "state": "configuring", 00:45:02.605 "raid_level": "raid5f", 00:45:02.605 "superblock": true, 00:45:02.605 "num_base_bdevs": 4, 00:45:02.605 "num_base_bdevs_discovered": 1, 00:45:02.605 "num_base_bdevs_operational": 4, 00:45:02.605 "base_bdevs_list": [ 00:45:02.605 { 00:45:02.605 "name": "pt1", 00:45:02.605 "uuid": "057868b4-aa16-5820-96b5-561d38cddb6f", 00:45:02.605 "is_configured": true, 00:45:02.605 "data_offset": 2048, 00:45:02.605 "data_size": 63488 00:45:02.605 }, 00:45:02.605 { 00:45:02.605 "name": null, 00:45:02.605 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:02.605 "is_configured": false, 00:45:02.605 "data_offset": 2048, 00:45:02.605 "data_size": 63488 00:45:02.605 }, 00:45:02.605 { 00:45:02.605 "name": null, 00:45:02.605 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:02.605 "is_configured": false, 00:45:02.605 "data_offset": 2048, 00:45:02.605 "data_size": 63488 00:45:02.606 }, 00:45:02.606 { 00:45:02.606 "name": null, 00:45:02.606 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:02.606 "is_configured": false, 00:45:02.606 "data_offset": 2048, 00:45:02.606 "data_size": 63488 00:45:02.606 } 00:45:02.606 ] 00:45:02.606 }' 00:45:02.606 16:22:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:02.606 16:22:06 -- common/autotest_common.sh@10 -- # set +x 00:45:02.862 16:22:07 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:45:02.862 16:22:07 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:45:03.120 [2024-07-22 16:22:07.269931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:45:03.120 [2024-07-22 16:22:07.270118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:03.120 [2024-07-22 16:22:07.270187] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:45:03.120 [2024-07-22 16:22:07.270218] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:03.120 [2024-07-22 16:22:07.271030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:03.120 [2024-07-22 16:22:07.271080] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:45:03.120 [2024-07-22 16:22:07.271257] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:45:03.120 [2024-07-22 16:22:07.271317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:03.120 pt2 00:45:03.120 16:22:07 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:45:03.378 [2024-07-22 16:22:07.537919] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:03.378 16:22:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:03.636 16:22:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:03.636 "name": "raid_bdev1", 00:45:03.636 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:03.636 "strip_size_kb": 64, 00:45:03.636 "state": "configuring", 00:45:03.636 "raid_level": "raid5f", 00:45:03.636 "superblock": true, 00:45:03.636 "num_base_bdevs": 4, 00:45:03.636 "num_base_bdevs_discovered": 1, 00:45:03.636 "num_base_bdevs_operational": 4, 00:45:03.636 "base_bdevs_list": [ 00:45:03.636 { 00:45:03.636 "name": "pt1", 00:45:03.636 "uuid": "057868b4-aa16-5820-96b5-561d38cddb6f", 00:45:03.636 "is_configured": true, 00:45:03.636 "data_offset": 2048, 00:45:03.636 "data_size": 63488 00:45:03.636 }, 00:45:03.636 { 00:45:03.636 "name": null, 00:45:03.636 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:03.636 "is_configured": false, 00:45:03.636 "data_offset": 2048, 00:45:03.636 "data_size": 63488 00:45:03.636 }, 00:45:03.636 { 00:45:03.636 "name": null, 00:45:03.636 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:03.636 "is_configured": false, 00:45:03.636 "data_offset": 2048, 00:45:03.636 "data_size": 63488 00:45:03.636 }, 00:45:03.636 { 00:45:03.636 "name": null, 00:45:03.636 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:03.636 "is_configured": false, 00:45:03.636 "data_offset": 2048, 00:45:03.636 "data_size": 63488 00:45:03.636 } 00:45:03.636 ] 00:45:03.636 }' 00:45:03.636 16:22:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:03.636 16:22:07 -- common/autotest_common.sh@10 -- # set +x 00:45:03.893 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:45:03.893 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:45:03.893 16:22:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:45:04.151 [2024-07-22 16:22:08.366175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:45:04.151 [2024-07-22 16:22:08.366298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:04.151 [2024-07-22 16:22:08.366337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:45:04.151 [2024-07-22 16:22:08.366357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:04.151 [2024-07-22 16:22:08.366985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:04.151 [2024-07-22 16:22:08.367068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:45:04.151 [2024-07-22 16:22:08.367191] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:45:04.151 [2024-07-22 16:22:08.367234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:04.151 pt2 00:45:04.151 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:45:04.151 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:45:04.151 16:22:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:45:04.410 [2024-07-22 16:22:08.630347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:45:04.410 [2024-07-22 16:22:08.630461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:04.410 [2024-07-22 16:22:08.630499] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:45:04.410 [2024-07-22 16:22:08.630519] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:04.410 [2024-07-22 16:22:08.631150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:04.410 [2024-07-22 16:22:08.631185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:45:04.410 [2024-07-22 16:22:08.631302] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:45:04.410 [2024-07-22 16:22:08.631347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:45:04.410 pt3 00:45:04.410 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:45:04.410 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:45:04.410 16:22:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:45:04.669 [2024-07-22 16:22:08.874445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:45:04.669 [2024-07-22 16:22:08.874600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:04.669 [2024-07-22 16:22:08.874648] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:45:04.669 [2024-07-22 16:22:08.874700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:04.669 [2024-07-22 16:22:08.875378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:04.669 [2024-07-22 16:22:08.875418] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:45:04.669 [2024-07-22 16:22:08.875563] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:45:04.669 [2024-07-22 16:22:08.875610] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:45:04.669 [2024-07-22 16:22:08.875805] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:45:04.669 [2024-07-22 16:22:08.875826] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:45:04.669 [2024-07-22 16:22:08.875938] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:45:04.669 [2024-07-22 16:22:08.883354] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:45:04.669 [2024-07-22 16:22:08.883380] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:45:04.669 [2024-07-22 16:22:08.883593] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:45:04.669 pt4 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:04.669 16:22:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:04.927 16:22:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:04.927 "name": "raid_bdev1", 00:45:04.927 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:04.927 "strip_size_kb": 64, 00:45:04.927 "state": "online", 00:45:04.927 "raid_level": "raid5f", 00:45:04.927 "superblock": true, 00:45:04.927 "num_base_bdevs": 4, 00:45:04.927 "num_base_bdevs_discovered": 4, 00:45:04.927 "num_base_bdevs_operational": 4, 00:45:04.927 "base_bdevs_list": [ 00:45:04.927 { 00:45:04.927 "name": "pt1", 00:45:04.927 "uuid": "057868b4-aa16-5820-96b5-561d38cddb6f", 00:45:04.927 "is_configured": true, 00:45:04.927 "data_offset": 2048, 00:45:04.927 "data_size": 63488 00:45:04.927 }, 00:45:04.927 { 00:45:04.927 "name": "pt2", 00:45:04.927 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:04.927 "is_configured": true, 00:45:04.927 "data_offset": 2048, 00:45:04.927 "data_size": 63488 00:45:04.927 }, 00:45:04.927 { 00:45:04.927 "name": "pt3", 00:45:04.927 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:04.927 "is_configured": true, 00:45:04.927 "data_offset": 2048, 00:45:04.927 "data_size": 63488 00:45:04.927 }, 00:45:04.927 { 00:45:04.927 "name": "pt4", 00:45:04.927 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:04.927 "is_configured": true, 00:45:04.927 "data_offset": 2048, 00:45:04.927 "data_size": 63488 00:45:04.927 } 00:45:04.927 ] 00:45:04.927 }' 00:45:04.927 16:22:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:04.927 16:22:09 -- common/autotest_common.sh@10 -- # set +x 00:45:05.185 16:22:09 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:45:05.443 16:22:09 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:45:05.443 [2024-07-22 16:22:09.664290] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:05.443 16:22:09 -- bdev/bdev_raid.sh@430 -- # '[' 70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb '!=' 70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb ']' 00:45:05.443 16:22:09 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:45:05.443 16:22:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:45:05.443 16:22:09 -- bdev/bdev_raid.sh@196 -- # return 0 00:45:05.443 16:22:09 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:45:05.701 [2024-07-22 16:22:09.896289] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:05.701 16:22:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:06.268 16:22:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:06.268 "name": "raid_bdev1", 00:45:06.268 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:06.268 "strip_size_kb": 64, 00:45:06.268 "state": "online", 00:45:06.268 "raid_level": "raid5f", 00:45:06.268 "superblock": true, 00:45:06.268 "num_base_bdevs": 4, 00:45:06.268 "num_base_bdevs_discovered": 3, 00:45:06.268 "num_base_bdevs_operational": 3, 00:45:06.268 "base_bdevs_list": [ 00:45:06.268 { 00:45:06.268 "name": null, 00:45:06.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:06.268 "is_configured": false, 00:45:06.268 "data_offset": 2048, 00:45:06.268 "data_size": 63488 00:45:06.268 }, 00:45:06.268 { 00:45:06.268 "name": "pt2", 00:45:06.268 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:06.268 "is_configured": true, 00:45:06.268 "data_offset": 2048, 00:45:06.268 "data_size": 63488 00:45:06.268 }, 00:45:06.268 { 00:45:06.268 "name": "pt3", 00:45:06.268 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:06.268 "is_configured": true, 00:45:06.268 "data_offset": 2048, 00:45:06.268 "data_size": 63488 00:45:06.268 }, 00:45:06.268 { 00:45:06.268 "name": "pt4", 00:45:06.268 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:06.268 "is_configured": true, 00:45:06.268 "data_offset": 2048, 00:45:06.268 "data_size": 63488 00:45:06.268 } 00:45:06.268 ] 00:45:06.268 }' 00:45:06.268 16:22:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:06.268 16:22:10 -- common/autotest_common.sh@10 -- # set +x 00:45:06.525 16:22:10 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:45:06.783 [2024-07-22 16:22:10.944542] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:06.783 [2024-07-22 16:22:10.944629] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:06.783 [2024-07-22 16:22:10.944732] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:06.783 [2024-07-22 16:22:10.944853] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:06.783 [2024-07-22 16:22:10.944873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:45:06.783 16:22:10 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:06.783 16:22:10 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:45:07.040 16:22:11 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:45:07.040 16:22:11 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:45:07.040 16:22:11 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:45:07.040 16:22:11 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:45:07.040 16:22:11 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:45:07.298 16:22:11 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:45:07.298 16:22:11 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:45:07.298 16:22:11 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:45:07.555 16:22:11 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:45:07.555 16:22:11 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:45:07.555 16:22:11 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:45:07.836 16:22:12 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:45:07.836 16:22:12 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:45:07.836 16:22:12 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:45:07.836 16:22:12 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:45:07.836 16:22:12 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:45:08.094 [2024-07-22 16:22:12.304838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:45:08.094 [2024-07-22 16:22:12.305220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:08.094 [2024-07-22 16:22:12.305279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:45:08.094 [2024-07-22 16:22:12.305298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:08.094 [2024-07-22 16:22:12.308367] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:08.094 pt2 00:45:08.094 [2024-07-22 16:22:12.308559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:45:08.094 [2024-07-22 16:22:12.308730] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:45:08.094 [2024-07-22 16:22:12.308799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:08.094 16:22:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:08.351 16:22:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:08.351 "name": "raid_bdev1", 00:45:08.351 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:08.351 "strip_size_kb": 64, 00:45:08.351 "state": "configuring", 00:45:08.351 "raid_level": "raid5f", 00:45:08.351 "superblock": true, 00:45:08.351 "num_base_bdevs": 4, 00:45:08.351 "num_base_bdevs_discovered": 1, 00:45:08.351 "num_base_bdevs_operational": 3, 00:45:08.351 "base_bdevs_list": [ 00:45:08.351 { 00:45:08.351 "name": null, 00:45:08.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:08.351 "is_configured": false, 00:45:08.351 "data_offset": 2048, 00:45:08.351 "data_size": 63488 00:45:08.351 }, 00:45:08.351 { 00:45:08.351 "name": "pt2", 00:45:08.351 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:08.351 "is_configured": true, 00:45:08.351 "data_offset": 2048, 00:45:08.351 "data_size": 63488 00:45:08.351 }, 00:45:08.351 { 00:45:08.351 "name": null, 00:45:08.351 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:08.351 "is_configured": false, 00:45:08.351 "data_offset": 2048, 00:45:08.351 "data_size": 63488 00:45:08.351 }, 00:45:08.351 { 00:45:08.351 "name": null, 00:45:08.351 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:08.351 "is_configured": false, 00:45:08.351 "data_offset": 2048, 00:45:08.351 "data_size": 63488 00:45:08.351 } 00:45:08.351 ] 00:45:08.351 }' 00:45:08.351 16:22:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:08.351 16:22:12 -- common/autotest_common.sh@10 -- # set +x 00:45:08.609 16:22:12 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:45:08.609 16:22:12 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:45:08.609 16:22:12 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:45:08.868 [2024-07-22 16:22:13.141099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:45:08.868 [2024-07-22 16:22:13.141197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:08.868 [2024-07-22 16:22:13.141239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:45:08.868 [2024-07-22 16:22:13.141256] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:08.868 [2024-07-22 16:22:13.141871] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:08.868 [2024-07-22 16:22:13.141908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:45:08.868 [2024-07-22 16:22:13.142055] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:45:08.868 [2024-07-22 16:22:13.142105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:45:09.126 pt3 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:09.126 16:22:13 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:09.126 16:22:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:09.384 16:22:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:09.384 "name": "raid_bdev1", 00:45:09.384 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:09.385 "strip_size_kb": 64, 00:45:09.385 "state": "configuring", 00:45:09.385 "raid_level": "raid5f", 00:45:09.385 "superblock": true, 00:45:09.385 "num_base_bdevs": 4, 00:45:09.385 "num_base_bdevs_discovered": 2, 00:45:09.385 "num_base_bdevs_operational": 3, 00:45:09.385 "base_bdevs_list": [ 00:45:09.385 { 00:45:09.385 "name": null, 00:45:09.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:09.385 "is_configured": false, 00:45:09.385 "data_offset": 2048, 00:45:09.385 "data_size": 63488 00:45:09.385 }, 00:45:09.385 { 00:45:09.385 "name": "pt2", 00:45:09.385 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:09.385 "is_configured": true, 00:45:09.385 "data_offset": 2048, 00:45:09.385 "data_size": 63488 00:45:09.385 }, 00:45:09.385 { 00:45:09.385 "name": "pt3", 00:45:09.385 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:09.385 "is_configured": true, 00:45:09.385 "data_offset": 2048, 00:45:09.385 "data_size": 63488 00:45:09.385 }, 00:45:09.385 { 00:45:09.385 "name": null, 00:45:09.385 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:09.385 "is_configured": false, 00:45:09.385 "data_offset": 2048, 00:45:09.385 "data_size": 63488 00:45:09.385 } 00:45:09.385 ] 00:45:09.385 }' 00:45:09.385 16:22:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:09.385 16:22:13 -- common/autotest_common.sh@10 -- # set +x 00:45:09.643 16:22:13 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:45:09.643 16:22:13 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:45:09.643 16:22:13 -- bdev/bdev_raid.sh@462 -- # i=3 00:45:09.643 16:22:13 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:45:09.900 [2024-07-22 16:22:14.049356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:45:09.900 [2024-07-22 16:22:14.049471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:09.900 [2024-07-22 16:22:14.049520] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:45:09.900 [2024-07-22 16:22:14.049549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:09.900 [2024-07-22 16:22:14.050196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:09.901 [2024-07-22 16:22:14.050223] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:45:09.901 [2024-07-22 16:22:14.050347] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:45:09.901 [2024-07-22 16:22:14.050381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:45:09.901 [2024-07-22 16:22:14.050563] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:45:09.901 
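Each base bdev in this test is a passthru device layered over a malloc bdev with a fixed UUID, so the raid5f superblock written to it survives deletion and is re-examined whenever the device is registered again. Registering pt4 completes the set of operational base bdevs, at which point the raid module finishes configuring and brings raid_bdev1 online (pt1 stays absent; raid5f tolerates the single missing member). A condensed sketch of that pattern, reusing the names and UUID scheme from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Re-register pt4 on top of malloc4; the raid module finds its superblock and claims it.
  "$rpc" -s "$sock" bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
  # Removing the base bdev from the array again is the inverse call.
  "$rpc" -s "$sock" bdev_passthru_delete pt4
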
[2024-07-22 16:22:14.050579] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:45:09.901 [2024-07-22 16:22:14.050698] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:45:09.901 [2024-07-22 16:22:14.057649] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:45:09.901 [2024-07-22 16:22:14.057730] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:45:09.901 [2024-07-22 16:22:14.058184] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:09.901 pt4 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:09.901 16:22:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:10.159 16:22:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:10.159 "name": "raid_bdev1", 00:45:10.159 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:10.159 "strip_size_kb": 64, 00:45:10.159 "state": "online", 00:45:10.159 "raid_level": "raid5f", 00:45:10.159 "superblock": true, 00:45:10.159 "num_base_bdevs": 4, 00:45:10.159 "num_base_bdevs_discovered": 3, 00:45:10.159 "num_base_bdevs_operational": 3, 00:45:10.159 "base_bdevs_list": [ 00:45:10.159 { 00:45:10.159 "name": null, 00:45:10.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:10.159 "is_configured": false, 00:45:10.159 "data_offset": 2048, 00:45:10.159 "data_size": 63488 00:45:10.159 }, 00:45:10.159 { 00:45:10.159 "name": "pt2", 00:45:10.159 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:10.159 "is_configured": true, 00:45:10.159 "data_offset": 2048, 00:45:10.159 "data_size": 63488 00:45:10.159 }, 00:45:10.159 { 00:45:10.159 "name": "pt3", 00:45:10.159 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:10.159 "is_configured": true, 00:45:10.159 "data_offset": 2048, 00:45:10.159 "data_size": 63488 00:45:10.159 }, 00:45:10.159 { 00:45:10.159 "name": "pt4", 00:45:10.159 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:10.159 "is_configured": true, 00:45:10.159 "data_offset": 2048, 00:45:10.159 "data_size": 63488 00:45:10.159 } 00:45:10.159 ] 00:45:10.159 }' 00:45:10.159 16:22:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:10.159 16:22:14 -- common/autotest_common.sh@10 -- # set +x 00:45:10.732 16:22:14 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:45:10.732 16:22:14 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:45:10.732 [2024-07-22 16:22:14.970253] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:10.732 
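Deleting the array releases its base bdevs, but their on-disk superblocks remain, so re-registering any one of them later re-creates raid_bdev1 in the configuring state (as happens with pt1 just below). A short sketch of the delete-and-verify step, using the same RPC calls the script issues:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
  # An empty dump confirms no RAID bdev is left registered.
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[]'
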
[2024-07-22 16:22:14.970322] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:10.732 [2024-07-22 16:22:14.970437] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:10.732 [2024-07-22 16:22:14.970544] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:10.732 [2024-07-22 16:22:14.970570] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:45:10.732 16:22:14 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:10.732 16:22:14 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:45:11.005 16:22:15 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:45:11.005 16:22:15 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:45:11.005 16:22:15 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:45:11.261 [2024-07-22 16:22:15.454401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:45:11.261 [2024-07-22 16:22:15.454522] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:11.261 [2024-07-22 16:22:15.454563] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:45:11.261 [2024-07-22 16:22:15.454587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:11.261 [2024-07-22 16:22:15.457450] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:11.261 [2024-07-22 16:22:15.457503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:45:11.261 [2024-07-22 16:22:15.457629] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:45:11.261 [2024-07-22 16:22:15.457718] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:45:11.261 pt1 00:45:11.261 16:22:15 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:45:11.261 16:22:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:11.261 16:22:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:45:11.261 16:22:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:11.261 16:22:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:11.261 16:22:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:45:11.261 16:22:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:11.262 16:22:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:11.262 16:22:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:11.262 16:22:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:11.262 16:22:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:11.262 16:22:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:11.518 16:22:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:11.518 "name": "raid_bdev1", 00:45:11.518 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:11.518 "strip_size_kb": 64, 00:45:11.518 "state": "configuring", 00:45:11.518 "raid_level": "raid5f", 00:45:11.518 "superblock": true, 00:45:11.518 "num_base_bdevs": 4, 00:45:11.518 "num_base_bdevs_discovered": 1, 00:45:11.518 
"num_base_bdevs_operational": 4, 00:45:11.518 "base_bdevs_list": [ 00:45:11.518 { 00:45:11.518 "name": "pt1", 00:45:11.518 "uuid": "057868b4-aa16-5820-96b5-561d38cddb6f", 00:45:11.518 "is_configured": true, 00:45:11.518 "data_offset": 2048, 00:45:11.518 "data_size": 63488 00:45:11.518 }, 00:45:11.518 { 00:45:11.518 "name": null, 00:45:11.518 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:11.518 "is_configured": false, 00:45:11.518 "data_offset": 2048, 00:45:11.518 "data_size": 63488 00:45:11.518 }, 00:45:11.518 { 00:45:11.518 "name": null, 00:45:11.518 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:11.518 "is_configured": false, 00:45:11.518 "data_offset": 2048, 00:45:11.518 "data_size": 63488 00:45:11.518 }, 00:45:11.518 { 00:45:11.518 "name": null, 00:45:11.518 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:11.518 "is_configured": false, 00:45:11.518 "data_offset": 2048, 00:45:11.518 "data_size": 63488 00:45:11.518 } 00:45:11.518 ] 00:45:11.518 }' 00:45:11.518 16:22:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:11.518 16:22:15 -- common/autotest_common.sh@10 -- # set +x 00:45:11.776 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:45:11.776 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:45:11.776 16:22:16 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:45:12.342 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:45:12.342 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:45:12.342 16:22:16 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:45:12.342 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:45:12.342 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:45:12.343 16:22:16 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:45:12.601 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:45:12.601 16:22:16 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:45:12.601 16:22:16 -- bdev/bdev_raid.sh@489 -- # i=3 00:45:12.601 16:22:16 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:45:12.865 [2024-07-22 16:22:17.086875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:45:12.865 [2024-07-22 16:22:17.086983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:12.865 [2024-07-22 16:22:17.087047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:45:12.865 [2024-07-22 16:22:17.087074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:12.865 [2024-07-22 16:22:17.087675] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:12.865 [2024-07-22 16:22:17.087721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:45:12.865 [2024-07-22 16:22:17.087859] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:45:12.865 [2024-07-22 16:22:17.087891] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:45:12.865 [2024-07-22 16:22:17.087905] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:12.865 [2024-07-22 
16:22:17.087938] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 00:45:12.865 [2024-07-22 16:22:17.088053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:45:12.865 pt4 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:12.865 16:22:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:13.123 16:22:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:13.123 "name": "raid_bdev1", 00:45:13.123 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:13.123 "strip_size_kb": 64, 00:45:13.123 "state": "configuring", 00:45:13.123 "raid_level": "raid5f", 00:45:13.123 "superblock": true, 00:45:13.123 "num_base_bdevs": 4, 00:45:13.123 "num_base_bdevs_discovered": 1, 00:45:13.123 "num_base_bdevs_operational": 3, 00:45:13.123 "base_bdevs_list": [ 00:45:13.123 { 00:45:13.123 "name": null, 00:45:13.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:13.123 "is_configured": false, 00:45:13.123 "data_offset": 2048, 00:45:13.123 "data_size": 63488 00:45:13.123 }, 00:45:13.123 { 00:45:13.123 "name": null, 00:45:13.123 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:13.123 "is_configured": false, 00:45:13.123 "data_offset": 2048, 00:45:13.123 "data_size": 63488 00:45:13.123 }, 00:45:13.123 { 00:45:13.123 "name": null, 00:45:13.123 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:13.123 "is_configured": false, 00:45:13.123 "data_offset": 2048, 00:45:13.123 "data_size": 63488 00:45:13.124 }, 00:45:13.124 { 00:45:13.124 "name": "pt4", 00:45:13.124 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:13.124 "is_configured": true, 00:45:13.124 "data_offset": 2048, 00:45:13.124 "data_size": 63488 00:45:13.124 } 00:45:13.124 ] 00:45:13.124 }' 00:45:13.124 16:22:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:13.124 16:22:17 -- common/autotest_common.sh@10 -- # set +x 00:45:13.689 16:22:17 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:45:13.689 16:22:17 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:45:13.689 16:22:17 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:45:13.689 [2024-07-22 16:22:17.929542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:45:13.689 [2024-07-22 16:22:17.929654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:13.689 [2024-07-22 16:22:17.929697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 
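The re-registration pass in progress here (pt2, then pt3) follows the same pattern used earlier in the suite; what is new is the sequence-number rule exercised just above: pt4 carries a raid5f superblock with a higher seq_number (4) than the raid bdev assembled so far (2), so the stale raid_bdev1 is dropped and re-created from pt4's view of the array. A condensed sketch of the loop, assuming the malloc/pt naming and UUID scheme from this run (the script itself iterates with a while-style counter):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Re-register the remaining base bdevs; each superblock is examined as the device appears.
  for i in 2 3; do
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
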
00:45:13.689 [2024-07-22 16:22:17.929714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:13.689 [2024-07-22 16:22:17.930290] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:13.689 [2024-07-22 16:22:17.930317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:45:13.689 [2024-07-22 16:22:17.930431] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:45:13.689 [2024-07-22 16:22:17.930469] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:45:13.689 pt2 00:45:13.689 16:22:17 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:45:13.689 16:22:17 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:45:13.689 16:22:17 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:45:14.256 [2024-07-22 16:22:18.221789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:45:14.256 [2024-07-22 16:22:18.221886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:14.256 [2024-07-22 16:22:18.221937] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:45:14.256 [2024-07-22 16:22:18.221953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:14.256 [2024-07-22 16:22:18.222730] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:14.256 [2024-07-22 16:22:18.222765] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:45:14.256 [2024-07-22 16:22:18.222880] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:45:14.256 [2024-07-22 16:22:18.222911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:45:14.256 [2024-07-22 16:22:18.223429] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:45:14.256 [2024-07-22 16:22:18.223454] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:45:14.256 [2024-07-22 16:22:18.223567] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:45:14.256 [2024-07-22 16:22:18.230546] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:45:14.256 [2024-07-22 16:22:18.230581] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:45:14.256 [2024-07-22 16:22:18.230909] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:14.256 pt3 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:14.256 16:22:18 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:14.256 16:22:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:14.514 16:22:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:14.514 "name": "raid_bdev1", 00:45:14.514 "uuid": "70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb", 00:45:14.514 "strip_size_kb": 64, 00:45:14.514 "state": "online", 00:45:14.514 "raid_level": "raid5f", 00:45:14.514 "superblock": true, 00:45:14.514 "num_base_bdevs": 4, 00:45:14.514 "num_base_bdevs_discovered": 3, 00:45:14.514 "num_base_bdevs_operational": 3, 00:45:14.514 "base_bdevs_list": [ 00:45:14.514 { 00:45:14.514 "name": null, 00:45:14.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:14.514 "is_configured": false, 00:45:14.514 "data_offset": 2048, 00:45:14.514 "data_size": 63488 00:45:14.514 }, 00:45:14.514 { 00:45:14.514 "name": "pt2", 00:45:14.514 "uuid": "b4591ed9-4753-57dc-9750-1585a38dab45", 00:45:14.514 "is_configured": true, 00:45:14.514 "data_offset": 2048, 00:45:14.514 "data_size": 63488 00:45:14.514 }, 00:45:14.514 { 00:45:14.514 "name": "pt3", 00:45:14.514 "uuid": "103a4568-2f71-538e-a3af-a54ce241261d", 00:45:14.514 "is_configured": true, 00:45:14.514 "data_offset": 2048, 00:45:14.514 "data_size": 63488 00:45:14.514 }, 00:45:14.514 { 00:45:14.514 "name": "pt4", 00:45:14.514 "uuid": "77e76f67-e7c7-52ad-bbc8-68aee32f9dad", 00:45:14.514 "is_configured": true, 00:45:14.514 "data_offset": 2048, 00:45:14.514 "data_size": 63488 00:45:14.514 } 00:45:14.514 ] 00:45:14.514 }' 00:45:14.514 16:22:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:14.514 16:22:18 -- common/autotest_common.sh@10 -- # set +x 00:45:14.772 16:22:18 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:45:14.772 16:22:18 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:45:15.031 [2024-07-22 16:22:19.095858] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:15.031 16:22:19 -- bdev/bdev_raid.sh@506 -- # '[' 70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb '!=' 70c7d9c0-7575-4cd3-82ad-01ad3bba3cfb ']' 00:45:15.031 16:22:19 -- bdev/bdev_raid.sh@511 -- # killprocess 87013 00:45:15.031 16:22:19 -- common/autotest_common.sh@926 -- # '[' -z 87013 ']' 00:45:15.031 16:22:19 -- common/autotest_common.sh@930 -- # kill -0 87013 00:45:15.031 16:22:19 -- common/autotest_common.sh@931 -- # uname 00:45:15.031 16:22:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:15.031 16:22:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87013 00:45:15.031 killing process with pid 87013 00:45:15.031 16:22:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:15.031 16:22:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:15.031 16:22:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87013' 00:45:15.031 16:22:19 -- common/autotest_common.sh@945 -- # kill 87013 00:45:15.031 16:22:19 -- common/autotest_common.sh@950 -- # wait 87013 00:45:15.031 [2024-07-22 16:22:19.158543] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:45:15.031 [2024-07-22 16:22:19.158666] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:15.031 [2024-07-22 16:22:19.158769] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:45:15.031 [2024-07-22 16:22:19.159102] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:45:15.289 [2024-07-22 16:22:19.530093] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@513 -- # return 0 00:45:16.662 00:45:16.662 real 0m21.407s 00:45:16.662 user 0m36.849s 00:45:16.662 sys 0m3.383s 00:45:16.662 16:22:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:16.662 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:45:16.662 ************************************ 00:45:16.662 END TEST raid5f_superblock_test 00:45:16.662 ************************************ 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:45:16.662 16:22:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:45:16.662 16:22:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:16.662 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:45:16.662 ************************************ 00:45:16.662 START TEST raid5f_rebuild_test 00:45:16.662 ************************************ 00:45:16.662 16:22:20 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@534 -- # 
create_arg+=' -z 64' 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=87644 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 87644 /var/tmp/spdk-raid.sock 00:45:16.662 16:22:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:45:16.662 16:22:20 -- common/autotest_common.sh@819 -- # '[' -z 87644 ']' 00:45:16.663 16:22:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:45:16.663 16:22:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:16.663 16:22:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:45:16.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:45:16.663 16:22:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:16.663 16:22:20 -- common/autotest_common.sh@10 -- # set +x 00:45:16.921 [2024-07-22 16:22:20.985266] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:45:16.921 [2024-07-22 16:22:20.985607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefI/O size of 3145728 is greater than zero copy threshold (65536). 00:45:16.921 Zero copy mechanism will not be used. 00:45:16.921 ix=spdk_pid87644 ] 00:45:16.921 [2024-07-22 16:22:21.160673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.179 [2024-07-22 16:22:21.434594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:17.437 [2024-07-22 16:22:21.651475] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:18.003 16:22:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:18.003 16:22:21 -- common/autotest_common.sh@852 -- # return 0 00:45:18.003 16:22:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:18.003 16:22:21 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:45:18.003 16:22:21 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:45:18.003 BaseBdev1 00:45:18.262 16:22:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:18.262 16:22:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:45:18.262 16:22:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:45:18.531 BaseBdev2 00:45:18.531 16:22:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:18.531 16:22:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:45:18.531 16:22:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:45:18.803 BaseBdev3 00:45:18.803 16:22:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:18.803 16:22:22 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:45:18.803 16:22:22 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:45:19.061 BaseBdev4 00:45:19.061 16:22:23 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:45:19.321 spare_malloc 00:45:19.321 16:22:23 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:45:19.579 spare_delay 00:45:19.580 16:22:23 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:45:19.838 [2024-07-22 16:22:23.915116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:45:19.838 [2024-07-22 16:22:23.915232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:19.838 [2024-07-22 16:22:23.915270] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:45:19.838 [2024-07-22 16:22:23.915291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:19.838 [2024-07-22 16:22:23.918277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:19.838 [2024-07-22 16:22:23.918333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:45:19.838 spare 00:45:19.838 16:22:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:45:20.096 [2024-07-22 16:22:24.155361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:45:20.096 [2024-07-22 16:22:24.157737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:45:20.096 [2024-07-22 16:22:24.157823] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:45:20.096 [2024-07-22 16:22:24.157884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:45:20.096 [2024-07-22 16:22:24.157994] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:45:20.096 [2024-07-22 16:22:24.158055] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:45:20.096 [2024-07-22 16:22:24.158226] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:45:20.096 [2024-07-22 16:22:24.165290] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:45:20.096 [2024-07-22 16:22:24.165324] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:45:20.096 [2024-07-22 16:22:24.165638] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:20.096 16:22:24 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:20.096 16:22:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:20.354 16:22:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:20.354 "name": "raid_bdev1", 00:45:20.354 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:20.354 "strip_size_kb": 64, 00:45:20.354 "state": "online", 00:45:20.354 "raid_level": "raid5f", 00:45:20.354 "superblock": false, 00:45:20.354 "num_base_bdevs": 4, 00:45:20.354 "num_base_bdevs_discovered": 4, 00:45:20.354 "num_base_bdevs_operational": 4, 00:45:20.354 "base_bdevs_list": [ 00:45:20.354 { 00:45:20.354 "name": "BaseBdev1", 00:45:20.354 "uuid": "9e7f7f78-7471-4101-aa9f-226b8be0dd61", 00:45:20.355 "is_configured": true, 00:45:20.355 "data_offset": 0, 00:45:20.355 "data_size": 65536 00:45:20.355 }, 00:45:20.355 { 00:45:20.355 "name": "BaseBdev2", 00:45:20.355 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:20.355 "is_configured": true, 00:45:20.355 "data_offset": 0, 00:45:20.355 "data_size": 65536 00:45:20.355 }, 00:45:20.355 { 00:45:20.355 "name": "BaseBdev3", 00:45:20.355 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:20.355 "is_configured": true, 00:45:20.355 "data_offset": 0, 00:45:20.355 "data_size": 65536 00:45:20.355 }, 00:45:20.355 { 00:45:20.355 "name": "BaseBdev4", 00:45:20.355 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:20.355 "is_configured": true, 00:45:20.355 "data_offset": 0, 00:45:20.355 "data_size": 65536 00:45:20.355 } 00:45:20.355 ] 00:45:20.355 }' 00:45:20.355 16:22:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:20.355 16:22:24 -- common/autotest_common.sh@10 -- # set +x 00:45:20.613 16:22:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:45:20.613 16:22:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:45:20.871 [2024-07-22 16:22:25.025992] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:20.871 16:22:25 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:45:20.871 16:22:25 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:20.871 16:22:25 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:45:21.129 16:22:25 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:45:21.129 16:22:25 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:45:21.129 16:22:25 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:45:21.129 16:22:25 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@12 -- # local i 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:21.129 16:22:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:45:21.404 [2024-07-22 
16:22:25.534003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:45:21.404 /dev/nbd0 00:45:21.404 16:22:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:21.404 16:22:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:21.404 16:22:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:45:21.404 16:22:25 -- common/autotest_common.sh@857 -- # local i 00:45:21.404 16:22:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:45:21.404 16:22:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:45:21.404 16:22:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:45:21.404 16:22:25 -- common/autotest_common.sh@861 -- # break 00:45:21.404 16:22:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:45:21.404 16:22:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:45:21.404 16:22:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:21.404 1+0 records in 00:45:21.404 1+0 records out 00:45:21.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304913 s, 13.4 MB/s 00:45:21.404 16:22:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:21.404 16:22:25 -- common/autotest_common.sh@874 -- # size=4096 00:45:21.404 16:22:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:21.404 16:22:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:45:21.404 16:22:25 -- common/autotest_common.sh@877 -- # return 0 00:45:21.404 16:22:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:21.404 16:22:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:21.404 16:22:25 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:45:21.404 16:22:25 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:45:21.404 16:22:25 -- bdev/bdev_raid.sh@582 -- # echo 192 00:45:21.404 16:22:25 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:45:21.971 512+0 records in 00:45:21.971 512+0 records out 00:45:21.971 100663296 bytes (101 MB, 96 MiB) copied, 0.629533 s, 160 MB/s 00:45:21.971 16:22:26 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:45:21.971 16:22:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:45:21.971 16:22:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:45:21.971 16:22:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:21.971 16:22:26 -- bdev/nbd_common.sh@51 -- # local i 00:45:21.971 16:22:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:21.971 16:22:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:45:22.228 16:22:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:22.228 [2024-07-22 16:22:26.502486] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:22.486 16:22:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:22.486 16:22:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:22.486 16:22:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:22.486 16:22:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:22.486 16:22:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:22.486 16:22:26 -- bdev/nbd_common.sh@41 -- # break 00:45:22.486 16:22:26 -- bdev/nbd_common.sh@45 -- # return 0 00:45:22.486 16:22:26 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:45:22.486 [2024-07-22 16:22:26.750736] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:22.745 16:22:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:23.003 16:22:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:23.003 "name": "raid_bdev1", 00:45:23.003 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:23.003 "strip_size_kb": 64, 00:45:23.003 "state": "online", 00:45:23.003 "raid_level": "raid5f", 00:45:23.003 "superblock": false, 00:45:23.003 "num_base_bdevs": 4, 00:45:23.003 "num_base_bdevs_discovered": 3, 00:45:23.003 "num_base_bdevs_operational": 3, 00:45:23.003 "base_bdevs_list": [ 00:45:23.003 { 00:45:23.003 "name": null, 00:45:23.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:23.003 "is_configured": false, 00:45:23.003 "data_offset": 0, 00:45:23.003 "data_size": 65536 00:45:23.004 }, 00:45:23.004 { 00:45:23.004 "name": "BaseBdev2", 00:45:23.004 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:23.004 "is_configured": true, 00:45:23.004 "data_offset": 0, 00:45:23.004 "data_size": 65536 00:45:23.004 }, 00:45:23.004 { 00:45:23.004 "name": "BaseBdev3", 00:45:23.004 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:23.004 "is_configured": true, 00:45:23.004 "data_offset": 0, 00:45:23.004 "data_size": 65536 00:45:23.004 }, 00:45:23.004 { 00:45:23.004 "name": "BaseBdev4", 00:45:23.004 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:23.004 "is_configured": true, 00:45:23.004 "data_offset": 0, 00:45:23.004 "data_size": 65536 00:45:23.004 } 00:45:23.004 ] 00:45:23.004 }' 00:45:23.004 16:22:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:23.004 16:22:27 -- common/autotest_common.sh@10 -- # set +x 00:45:23.262 16:22:27 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:45:23.262 [2024-07-22 16:22:27.510905] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:45:23.262 [2024-07-22 16:22:27.511042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:45:23.262 [2024-07-22 16:22:27.525682] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:45:23.518 [2024-07-22 16:22:27.535644] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:45:23.518 16:22:27 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:45:24.449 16:22:28 -- bdev/bdev_raid.sh@601 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:24.449 16:22:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:24.449 16:22:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:24.449 16:22:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:24.449 16:22:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:24.449 16:22:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:24.449 16:22:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:24.706 16:22:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:24.706 "name": "raid_bdev1", 00:45:24.706 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:24.706 "strip_size_kb": 64, 00:45:24.706 "state": "online", 00:45:24.706 "raid_level": "raid5f", 00:45:24.706 "superblock": false, 00:45:24.706 "num_base_bdevs": 4, 00:45:24.706 "num_base_bdevs_discovered": 4, 00:45:24.706 "num_base_bdevs_operational": 4, 00:45:24.706 "process": { 00:45:24.706 "type": "rebuild", 00:45:24.706 "target": "spare", 00:45:24.706 "progress": { 00:45:24.706 "blocks": 23040, 00:45:24.706 "percent": 11 00:45:24.706 } 00:45:24.706 }, 00:45:24.706 "base_bdevs_list": [ 00:45:24.706 { 00:45:24.706 "name": "spare", 00:45:24.706 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:24.706 "is_configured": true, 00:45:24.706 "data_offset": 0, 00:45:24.706 "data_size": 65536 00:45:24.706 }, 00:45:24.706 { 00:45:24.706 "name": "BaseBdev2", 00:45:24.706 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:24.706 "is_configured": true, 00:45:24.706 "data_offset": 0, 00:45:24.706 "data_size": 65536 00:45:24.706 }, 00:45:24.706 { 00:45:24.706 "name": "BaseBdev3", 00:45:24.706 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:24.706 "is_configured": true, 00:45:24.706 "data_offset": 0, 00:45:24.706 "data_size": 65536 00:45:24.706 }, 00:45:24.706 { 00:45:24.706 "name": "BaseBdev4", 00:45:24.706 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:24.706 "is_configured": true, 00:45:24.706 "data_offset": 0, 00:45:24.706 "data_size": 65536 00:45:24.706 } 00:45:24.706 ] 00:45:24.706 }' 00:45:24.706 16:22:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:24.706 16:22:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:24.706 16:22:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:24.706 16:22:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:24.706 16:22:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:45:24.964 [2024-07-22 16:22:29.009290] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:45:24.964 [2024-07-22 16:22:29.052713] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:45:24.964 [2024-07-22 16:22:29.052878] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:24.964 16:22:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:25.221 16:22:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:25.221 "name": "raid_bdev1", 00:45:25.221 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:25.221 "strip_size_kb": 64, 00:45:25.221 "state": "online", 00:45:25.221 "raid_level": "raid5f", 00:45:25.221 "superblock": false, 00:45:25.221 "num_base_bdevs": 4, 00:45:25.221 "num_base_bdevs_discovered": 3, 00:45:25.221 "num_base_bdevs_operational": 3, 00:45:25.221 "base_bdevs_list": [ 00:45:25.221 { 00:45:25.221 "name": null, 00:45:25.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:25.221 "is_configured": false, 00:45:25.221 "data_offset": 0, 00:45:25.221 "data_size": 65536 00:45:25.221 }, 00:45:25.221 { 00:45:25.221 "name": "BaseBdev2", 00:45:25.221 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:25.221 "is_configured": true, 00:45:25.221 "data_offset": 0, 00:45:25.221 "data_size": 65536 00:45:25.221 }, 00:45:25.221 { 00:45:25.221 "name": "BaseBdev3", 00:45:25.221 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:25.221 "is_configured": true, 00:45:25.221 "data_offset": 0, 00:45:25.221 "data_size": 65536 00:45:25.221 }, 00:45:25.221 { 00:45:25.221 "name": "BaseBdev4", 00:45:25.221 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:25.221 "is_configured": true, 00:45:25.221 "data_offset": 0, 00:45:25.221 "data_size": 65536 00:45:25.221 } 00:45:25.221 ] 00:45:25.221 }' 00:45:25.221 16:22:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:25.221 16:22:29 -- common/autotest_common.sh@10 -- # set +x 00:45:25.478 16:22:29 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:25.478 16:22:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:25.478 16:22:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:45:25.478 16:22:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:45:25.478 16:22:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:25.478 16:22:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:25.478 16:22:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:25.736 16:22:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:25.736 "name": "raid_bdev1", 00:45:25.736 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:25.736 "strip_size_kb": 64, 00:45:25.736 "state": "online", 00:45:25.736 "raid_level": "raid5f", 00:45:25.736 "superblock": false, 00:45:25.736 "num_base_bdevs": 4, 00:45:25.736 "num_base_bdevs_discovered": 3, 00:45:25.736 "num_base_bdevs_operational": 3, 00:45:25.736 "base_bdevs_list": [ 00:45:25.736 { 00:45:25.736 "name": null, 00:45:25.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:25.736 "is_configured": false, 00:45:25.736 "data_offset": 0, 00:45:25.736 "data_size": 65536 00:45:25.736 }, 00:45:25.736 { 00:45:25.736 "name": "BaseBdev2", 00:45:25.736 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:25.736 "is_configured": true, 
00:45:25.736 "data_offset": 0, 00:45:25.736 "data_size": 65536 00:45:25.736 }, 00:45:25.736 { 00:45:25.736 "name": "BaseBdev3", 00:45:25.736 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:25.736 "is_configured": true, 00:45:25.736 "data_offset": 0, 00:45:25.736 "data_size": 65536 00:45:25.736 }, 00:45:25.736 { 00:45:25.736 "name": "BaseBdev4", 00:45:25.736 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:25.736 "is_configured": true, 00:45:25.736 "data_offset": 0, 00:45:25.736 "data_size": 65536 00:45:25.736 } 00:45:25.736 ] 00:45:25.736 }' 00:45:25.736 16:22:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:25.736 16:22:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:45:25.736 16:22:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:25.736 16:22:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:45:25.736 16:22:29 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:45:25.993 [2024-07-22 16:22:30.172634] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:45:25.993 [2024-07-22 16:22:30.172697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:45:25.993 [2024-07-22 16:22:30.186216] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b0d0 00:45:25.993 [2024-07-22 16:22:30.195375] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:45:25.993 16:22:30 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:27.366 "name": "raid_bdev1", 00:45:27.366 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:27.366 "strip_size_kb": 64, 00:45:27.366 "state": "online", 00:45:27.366 "raid_level": "raid5f", 00:45:27.366 "superblock": false, 00:45:27.366 "num_base_bdevs": 4, 00:45:27.366 "num_base_bdevs_discovered": 4, 00:45:27.366 "num_base_bdevs_operational": 4, 00:45:27.366 "process": { 00:45:27.366 "type": "rebuild", 00:45:27.366 "target": "spare", 00:45:27.366 "progress": { 00:45:27.366 "blocks": 23040, 00:45:27.366 "percent": 11 00:45:27.366 } 00:45:27.366 }, 00:45:27.366 "base_bdevs_list": [ 00:45:27.366 { 00:45:27.366 "name": "spare", 00:45:27.366 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:27.366 "is_configured": true, 00:45:27.366 "data_offset": 0, 00:45:27.366 "data_size": 65536 00:45:27.366 }, 00:45:27.366 { 00:45:27.366 "name": "BaseBdev2", 00:45:27.366 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:27.366 "is_configured": true, 00:45:27.366 "data_offset": 0, 00:45:27.366 "data_size": 65536 00:45:27.366 }, 00:45:27.366 { 00:45:27.366 "name": "BaseBdev3", 00:45:27.366 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:27.366 "is_configured": true, 00:45:27.366 "data_offset": 0, 
00:45:27.366 "data_size": 65536 00:45:27.366 }, 00:45:27.366 { 00:45:27.366 "name": "BaseBdev4", 00:45:27.366 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:27.366 "is_configured": true, 00:45:27.366 "data_offset": 0, 00:45:27.366 "data_size": 65536 00:45:27.366 } 00:45:27.366 ] 00:45:27.366 }' 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@657 -- # local timeout=716 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:27.366 16:22:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:27.625 16:22:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:27.625 "name": "raid_bdev1", 00:45:27.625 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:27.625 "strip_size_kb": 64, 00:45:27.625 "state": "online", 00:45:27.625 "raid_level": "raid5f", 00:45:27.625 "superblock": false, 00:45:27.625 "num_base_bdevs": 4, 00:45:27.625 "num_base_bdevs_discovered": 4, 00:45:27.625 "num_base_bdevs_operational": 4, 00:45:27.625 "process": { 00:45:27.625 "type": "rebuild", 00:45:27.625 "target": "spare", 00:45:27.625 "progress": { 00:45:27.625 "blocks": 26880, 00:45:27.625 "percent": 13 00:45:27.625 } 00:45:27.625 }, 00:45:27.625 "base_bdevs_list": [ 00:45:27.625 { 00:45:27.625 "name": "spare", 00:45:27.625 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:27.625 "is_configured": true, 00:45:27.625 "data_offset": 0, 00:45:27.625 "data_size": 65536 00:45:27.625 }, 00:45:27.625 { 00:45:27.625 "name": "BaseBdev2", 00:45:27.625 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:27.625 "is_configured": true, 00:45:27.625 "data_offset": 0, 00:45:27.625 "data_size": 65536 00:45:27.625 }, 00:45:27.625 { 00:45:27.625 "name": "BaseBdev3", 00:45:27.625 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:27.625 "is_configured": true, 00:45:27.625 "data_offset": 0, 00:45:27.625 "data_size": 65536 00:45:27.625 }, 00:45:27.625 { 00:45:27.625 "name": "BaseBdev4", 00:45:27.625 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:27.625 "is_configured": true, 00:45:27.625 "data_offset": 0, 00:45:27.625 "data_size": 65536 00:45:27.625 } 00:45:27.625 ] 00:45:27.625 }' 00:45:27.625 16:22:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:27.625 16:22:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:27.625 16:22:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:27.625 16:22:31 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:27.625 16:22:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:28.559 16:22:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:28.817 16:22:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:28.817 "name": "raid_bdev1", 00:45:28.817 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:28.817 "strip_size_kb": 64, 00:45:28.817 "state": "online", 00:45:28.817 "raid_level": "raid5f", 00:45:28.817 "superblock": false, 00:45:28.817 "num_base_bdevs": 4, 00:45:28.817 "num_base_bdevs_discovered": 4, 00:45:28.817 "num_base_bdevs_operational": 4, 00:45:28.817 "process": { 00:45:28.817 "type": "rebuild", 00:45:28.817 "target": "spare", 00:45:28.817 "progress": { 00:45:28.817 "blocks": 51840, 00:45:28.817 "percent": 26 00:45:28.817 } 00:45:28.817 }, 00:45:28.817 "base_bdevs_list": [ 00:45:28.817 { 00:45:28.817 "name": "spare", 00:45:28.817 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:28.817 "is_configured": true, 00:45:28.817 "data_offset": 0, 00:45:28.817 "data_size": 65536 00:45:28.817 }, 00:45:28.817 { 00:45:28.817 "name": "BaseBdev2", 00:45:28.817 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:28.817 "is_configured": true, 00:45:28.817 "data_offset": 0, 00:45:28.817 "data_size": 65536 00:45:28.817 }, 00:45:28.817 { 00:45:28.817 "name": "BaseBdev3", 00:45:28.817 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:28.817 "is_configured": true, 00:45:28.817 "data_offset": 0, 00:45:28.817 "data_size": 65536 00:45:28.817 }, 00:45:28.817 { 00:45:28.817 "name": "BaseBdev4", 00:45:28.817 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:28.817 "is_configured": true, 00:45:28.817 "data_offset": 0, 00:45:28.817 "data_size": 65536 00:45:28.817 } 00:45:28.817 ] 00:45:28.817 }' 00:45:28.817 16:22:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:28.817 16:22:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:28.817 16:22:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:28.817 16:22:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:28.817 16:22:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:30.207 "name": "raid_bdev1", 00:45:30.207 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:30.207 "strip_size_kb": 64, 00:45:30.207 "state": "online", 00:45:30.207 "raid_level": "raid5f", 00:45:30.207 "superblock": false, 00:45:30.207 "num_base_bdevs": 4, 00:45:30.207 "num_base_bdevs_discovered": 4, 00:45:30.207 "num_base_bdevs_operational": 4, 00:45:30.207 "process": { 00:45:30.207 "type": "rebuild", 00:45:30.207 "target": "spare", 00:45:30.207 "progress": { 00:45:30.207 "blocks": 76800, 00:45:30.207 "percent": 39 00:45:30.207 } 00:45:30.207 }, 00:45:30.207 "base_bdevs_list": [ 00:45:30.207 { 00:45:30.207 "name": "spare", 00:45:30.207 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:30.207 "is_configured": true, 00:45:30.207 "data_offset": 0, 00:45:30.207 "data_size": 65536 00:45:30.207 }, 00:45:30.207 { 00:45:30.207 "name": "BaseBdev2", 00:45:30.207 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:30.207 "is_configured": true, 00:45:30.207 "data_offset": 0, 00:45:30.207 "data_size": 65536 00:45:30.207 }, 00:45:30.207 { 00:45:30.207 "name": "BaseBdev3", 00:45:30.207 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:30.207 "is_configured": true, 00:45:30.207 "data_offset": 0, 00:45:30.207 "data_size": 65536 00:45:30.207 }, 00:45:30.207 { 00:45:30.207 "name": "BaseBdev4", 00:45:30.207 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:30.207 "is_configured": true, 00:45:30.207 "data_offset": 0, 00:45:30.207 "data_size": 65536 00:45:30.207 } 00:45:30.207 ] 00:45:30.207 }' 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:30.207 16:22:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:31.142 16:22:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:31.400 16:22:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:31.400 "name": "raid_bdev1", 00:45:31.400 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:31.400 "strip_size_kb": 64, 00:45:31.400 "state": "online", 00:45:31.400 "raid_level": "raid5f", 00:45:31.400 "superblock": false, 00:45:31.400 "num_base_bdevs": 4, 00:45:31.400 "num_base_bdevs_discovered": 4, 00:45:31.400 "num_base_bdevs_operational": 4, 00:45:31.400 "process": { 00:45:31.400 "type": "rebuild", 00:45:31.400 "target": "spare", 00:45:31.401 "progress": { 00:45:31.401 "blocks": 101760, 00:45:31.401 "percent": 51 00:45:31.401 } 00:45:31.401 }, 00:45:31.401 "base_bdevs_list": [ 00:45:31.401 { 00:45:31.401 "name": "spare", 00:45:31.401 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:31.401 "is_configured": true, 00:45:31.401 "data_offset": 0, 
00:45:31.401 "data_size": 65536 00:45:31.401 }, 00:45:31.401 { 00:45:31.401 "name": "BaseBdev2", 00:45:31.401 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:31.401 "is_configured": true, 00:45:31.401 "data_offset": 0, 00:45:31.401 "data_size": 65536 00:45:31.401 }, 00:45:31.401 { 00:45:31.401 "name": "BaseBdev3", 00:45:31.401 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:31.401 "is_configured": true, 00:45:31.401 "data_offset": 0, 00:45:31.401 "data_size": 65536 00:45:31.401 }, 00:45:31.401 { 00:45:31.401 "name": "BaseBdev4", 00:45:31.401 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:31.401 "is_configured": true, 00:45:31.401 "data_offset": 0, 00:45:31.401 "data_size": 65536 00:45:31.401 } 00:45:31.401 ] 00:45:31.401 }' 00:45:31.401 16:22:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:31.401 16:22:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:31.401 16:22:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:31.401 16:22:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:31.401 16:22:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:32.775 "name": "raid_bdev1", 00:45:32.775 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:32.775 "strip_size_kb": 64, 00:45:32.775 "state": "online", 00:45:32.775 "raid_level": "raid5f", 00:45:32.775 "superblock": false, 00:45:32.775 "num_base_bdevs": 4, 00:45:32.775 "num_base_bdevs_discovered": 4, 00:45:32.775 "num_base_bdevs_operational": 4, 00:45:32.775 "process": { 00:45:32.775 "type": "rebuild", 00:45:32.775 "target": "spare", 00:45:32.775 "progress": { 00:45:32.775 "blocks": 126720, 00:45:32.775 "percent": 64 00:45:32.775 } 00:45:32.775 }, 00:45:32.775 "base_bdevs_list": [ 00:45:32.775 { 00:45:32.775 "name": "spare", 00:45:32.775 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:32.775 "is_configured": true, 00:45:32.775 "data_offset": 0, 00:45:32.775 "data_size": 65536 00:45:32.775 }, 00:45:32.775 { 00:45:32.775 "name": "BaseBdev2", 00:45:32.775 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:32.775 "is_configured": true, 00:45:32.775 "data_offset": 0, 00:45:32.775 "data_size": 65536 00:45:32.775 }, 00:45:32.775 { 00:45:32.775 "name": "BaseBdev3", 00:45:32.775 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:32.775 "is_configured": true, 00:45:32.775 "data_offset": 0, 00:45:32.775 "data_size": 65536 00:45:32.775 }, 00:45:32.775 { 00:45:32.775 "name": "BaseBdev4", 00:45:32.775 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:32.775 "is_configured": true, 00:45:32.775 "data_offset": 0, 00:45:32.775 "data_size": 65536 00:45:32.775 } 00:45:32.775 ] 00:45:32.775 }' 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 
00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:32.775 16:22:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:33.725 16:22:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:33.725 16:22:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:33.726 16:22:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:33.726 16:22:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:33.726 16:22:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:33.726 16:22:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:33.726 16:22:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:33.726 16:22:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:33.983 16:22:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:33.983 "name": "raid_bdev1", 00:45:33.983 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:33.983 "strip_size_kb": 64, 00:45:33.983 "state": "online", 00:45:33.983 "raid_level": "raid5f", 00:45:33.983 "superblock": false, 00:45:33.983 "num_base_bdevs": 4, 00:45:33.983 "num_base_bdevs_discovered": 4, 00:45:33.983 "num_base_bdevs_operational": 4, 00:45:33.983 "process": { 00:45:33.983 "type": "rebuild", 00:45:33.983 "target": "spare", 00:45:33.983 "progress": { 00:45:33.983 "blocks": 151680, 00:45:33.983 "percent": 77 00:45:33.983 } 00:45:33.983 }, 00:45:33.983 "base_bdevs_list": [ 00:45:33.983 { 00:45:33.983 "name": "spare", 00:45:33.983 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:33.983 "is_configured": true, 00:45:33.984 "data_offset": 0, 00:45:33.984 "data_size": 65536 00:45:33.984 }, 00:45:33.984 { 00:45:33.984 "name": "BaseBdev2", 00:45:33.984 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:33.984 "is_configured": true, 00:45:33.984 "data_offset": 0, 00:45:33.984 "data_size": 65536 00:45:33.984 }, 00:45:33.984 { 00:45:33.984 "name": "BaseBdev3", 00:45:33.984 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:33.984 "is_configured": true, 00:45:33.984 "data_offset": 0, 00:45:33.984 "data_size": 65536 00:45:33.984 }, 00:45:33.984 { 00:45:33.984 "name": "BaseBdev4", 00:45:33.984 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:33.984 "is_configured": true, 00:45:33.984 "data_offset": 0, 00:45:33.984 "data_size": 65536 00:45:33.984 } 00:45:33.984 ] 00:45:33.984 }' 00:45:33.984 16:22:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:33.984 16:22:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:33.984 16:22:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:33.984 16:22:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:33.984 16:22:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:35.357 16:22:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:35.357 16:22:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:35.357 16:22:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:35.357 16:22:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:35.357 16:22:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:35.357 16:22:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@188 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:35.358 "name": "raid_bdev1", 00:45:35.358 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:35.358 "strip_size_kb": 64, 00:45:35.358 "state": "online", 00:45:35.358 "raid_level": "raid5f", 00:45:35.358 "superblock": false, 00:45:35.358 "num_base_bdevs": 4, 00:45:35.358 "num_base_bdevs_discovered": 4, 00:45:35.358 "num_base_bdevs_operational": 4, 00:45:35.358 "process": { 00:45:35.358 "type": "rebuild", 00:45:35.358 "target": "spare", 00:45:35.358 "progress": { 00:45:35.358 "blocks": 176640, 00:45:35.358 "percent": 89 00:45:35.358 } 00:45:35.358 }, 00:45:35.358 "base_bdevs_list": [ 00:45:35.358 { 00:45:35.358 "name": "spare", 00:45:35.358 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:35.358 "is_configured": true, 00:45:35.358 "data_offset": 0, 00:45:35.358 "data_size": 65536 00:45:35.358 }, 00:45:35.358 { 00:45:35.358 "name": "BaseBdev2", 00:45:35.358 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:35.358 "is_configured": true, 00:45:35.358 "data_offset": 0, 00:45:35.358 "data_size": 65536 00:45:35.358 }, 00:45:35.358 { 00:45:35.358 "name": "BaseBdev3", 00:45:35.358 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:35.358 "is_configured": true, 00:45:35.358 "data_offset": 0, 00:45:35.358 "data_size": 65536 00:45:35.358 }, 00:45:35.358 { 00:45:35.358 "name": "BaseBdev4", 00:45:35.358 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:35.358 "is_configured": true, 00:45:35.358 "data_offset": 0, 00:45:35.358 "data_size": 65536 00:45:35.358 } 00:45:35.358 ] 00:45:35.358 }' 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:35.358 16:22:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:36.294 16:22:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:36.553 [2024-07-22 16:22:40.599711] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:45:36.553 [2024-07-22 16:22:40.599830] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:45:36.553 [2024-07-22 16:22:40.599916] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:36.553 16:22:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:36.553 "name": "raid_bdev1", 00:45:36.553 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:36.553 "strip_size_kb": 64, 00:45:36.553 "state": "online", 00:45:36.553 "raid_level": 
"raid5f", 00:45:36.553 "superblock": false, 00:45:36.553 "num_base_bdevs": 4, 00:45:36.553 "num_base_bdevs_discovered": 4, 00:45:36.553 "num_base_bdevs_operational": 4, 00:45:36.553 "base_bdevs_list": [ 00:45:36.553 { 00:45:36.553 "name": "spare", 00:45:36.553 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:36.553 "is_configured": true, 00:45:36.553 "data_offset": 0, 00:45:36.553 "data_size": 65536 00:45:36.553 }, 00:45:36.553 { 00:45:36.553 "name": "BaseBdev2", 00:45:36.553 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:36.553 "is_configured": true, 00:45:36.553 "data_offset": 0, 00:45:36.553 "data_size": 65536 00:45:36.553 }, 00:45:36.553 { 00:45:36.553 "name": "BaseBdev3", 00:45:36.553 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:36.553 "is_configured": true, 00:45:36.553 "data_offset": 0, 00:45:36.553 "data_size": 65536 00:45:36.553 }, 00:45:36.553 { 00:45:36.553 "name": "BaseBdev4", 00:45:36.553 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:36.553 "is_configured": true, 00:45:36.553 "data_offset": 0, 00:45:36.553 "data_size": 65536 00:45:36.553 } 00:45:36.553 ] 00:45:36.553 }' 00:45:36.553 16:22:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@660 -- # break 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:36.812 16:22:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:36.812 "name": "raid_bdev1", 00:45:36.812 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:36.812 "strip_size_kb": 64, 00:45:36.812 "state": "online", 00:45:36.812 "raid_level": "raid5f", 00:45:36.812 "superblock": false, 00:45:36.812 "num_base_bdevs": 4, 00:45:36.812 "num_base_bdevs_discovered": 4, 00:45:36.812 "num_base_bdevs_operational": 4, 00:45:36.812 "base_bdevs_list": [ 00:45:36.812 { 00:45:36.812 "name": "spare", 00:45:36.812 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:36.812 "is_configured": true, 00:45:36.812 "data_offset": 0, 00:45:36.812 "data_size": 65536 00:45:36.812 }, 00:45:36.812 { 00:45:36.812 "name": "BaseBdev2", 00:45:36.812 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:36.812 "is_configured": true, 00:45:36.812 "data_offset": 0, 00:45:36.812 "data_size": 65536 00:45:36.812 }, 00:45:36.812 { 00:45:36.812 "name": "BaseBdev3", 00:45:36.812 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:36.812 "is_configured": true, 00:45:36.812 "data_offset": 0, 00:45:36.812 "data_size": 65536 00:45:36.812 }, 00:45:36.812 { 00:45:36.812 "name": "BaseBdev4", 00:45:36.812 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:36.812 "is_configured": true, 00:45:36.812 "data_offset": 0, 00:45:36.812 "data_size": 65536 00:45:36.812 } 00:45:36.812 ] 00:45:36.812 }' 
00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:36.812 16:22:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:45:36.813 16:22:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:36.813 16:22:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:36.813 16:22:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:36.813 16:22:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:36.813 16:22:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:37.072 16:22:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:37.072 16:22:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:37.072 "name": "raid_bdev1", 00:45:37.072 "uuid": "dd02f97a-f142-4bf4-8f1d-54d309778d4b", 00:45:37.072 "strip_size_kb": 64, 00:45:37.072 "state": "online", 00:45:37.072 "raid_level": "raid5f", 00:45:37.072 "superblock": false, 00:45:37.072 "num_base_bdevs": 4, 00:45:37.072 "num_base_bdevs_discovered": 4, 00:45:37.072 "num_base_bdevs_operational": 4, 00:45:37.072 "base_bdevs_list": [ 00:45:37.072 { 00:45:37.072 "name": "spare", 00:45:37.072 "uuid": "682e4cbc-6e2e-5beb-8d4c-3f9eb490afca", 00:45:37.072 "is_configured": true, 00:45:37.072 "data_offset": 0, 00:45:37.072 "data_size": 65536 00:45:37.072 }, 00:45:37.072 { 00:45:37.072 "name": "BaseBdev2", 00:45:37.072 "uuid": "5d48317e-abe9-4cb4-b47f-3f418f82019c", 00:45:37.072 "is_configured": true, 00:45:37.072 "data_offset": 0, 00:45:37.072 "data_size": 65536 00:45:37.072 }, 00:45:37.072 { 00:45:37.072 "name": "BaseBdev3", 00:45:37.072 "uuid": "e1395a1e-48ce-4371-b4a5-37792cc2c9db", 00:45:37.072 "is_configured": true, 00:45:37.072 "data_offset": 0, 00:45:37.072 "data_size": 65536 00:45:37.072 }, 00:45:37.072 { 00:45:37.072 "name": "BaseBdev4", 00:45:37.072 "uuid": "eb468be6-ce2e-49c6-ab71-b82e17867354", 00:45:37.072 "is_configured": true, 00:45:37.072 "data_offset": 0, 00:45:37.072 "data_size": 65536 00:45:37.072 } 00:45:37.072 ] 00:45:37.072 }' 00:45:37.072 16:22:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:37.072 16:22:41 -- common/autotest_common.sh@10 -- # set +x 00:45:37.639 16:22:41 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:45:37.639 [2024-07-22 16:22:41.871213] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:45:37.639 [2024-07-22 16:22:41.871287] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:45:37.639 [2024-07-22 16:22:41.871392] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:45:37.639 [2024-07-22 16:22:41.871498] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:45:37.639 [2024-07-22 16:22:41.871515] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:45:37.639 16:22:41 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:37.639 16:22:41 -- bdev/bdev_raid.sh@671 -- # jq length 00:45:37.898 16:22:42 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:45:37.898 16:22:42 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:45:37.898 16:22:42 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@12 -- # local i 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:37.898 16:22:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:45:38.160 /dev/nbd0 00:45:38.160 16:22:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:38.160 16:22:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:38.160 16:22:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:45:38.160 16:22:42 -- common/autotest_common.sh@857 -- # local i 00:45:38.160 16:22:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:45:38.160 16:22:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:45:38.160 16:22:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:45:38.160 16:22:42 -- common/autotest_common.sh@861 -- # break 00:45:38.160 16:22:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:45:38.160 16:22:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:45:38.160 16:22:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:38.160 1+0 records in 00:45:38.160 1+0 records out 00:45:38.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000919588 s, 4.5 MB/s 00:45:38.160 16:22:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:38.160 16:22:42 -- common/autotest_common.sh@874 -- # size=4096 00:45:38.160 16:22:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:38.160 16:22:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:45:38.160 16:22:42 -- common/autotest_common.sh@877 -- # return 0 00:45:38.160 16:22:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:38.160 16:22:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:38.160 16:22:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:45:38.419 /dev/nbd1 00:45:38.419 16:22:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:45:38.419 16:22:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:45:38.419 16:22:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:45:38.419 16:22:42 -- common/autotest_common.sh@857 -- # local i 00:45:38.419 16:22:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:45:38.419 
16:22:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:45:38.419 16:22:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:45:38.419 16:22:42 -- common/autotest_common.sh@861 -- # break 00:45:38.419 16:22:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:45:38.419 16:22:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:45:38.419 16:22:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:38.419 1+0 records in 00:45:38.419 1+0 records out 00:45:38.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352271 s, 11.6 MB/s 00:45:38.419 16:22:42 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:38.419 16:22:42 -- common/autotest_common.sh@874 -- # size=4096 00:45:38.419 16:22:42 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:38.419 16:22:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:45:38.419 16:22:42 -- common/autotest_common.sh@877 -- # return 0 00:45:38.419 16:22:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:38.419 16:22:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:45:38.419 16:22:42 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:45:38.677 16:22:42 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:45:38.677 16:22:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:45:38.677 16:22:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:45:38.677 16:22:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:38.677 16:22:42 -- bdev/nbd_common.sh@51 -- # local i 00:45:38.677 16:22:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:38.678 16:22:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@41 -- # break 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@45 -- # return 0 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:38.936 16:22:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@41 -- # break 00:45:39.503 16:22:43 -- bdev/nbd_common.sh@45 -- # return 0 00:45:39.503 16:22:43 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:45:39.503 16:22:43 -- bdev/bdev_raid.sh@709 -- # killprocess 87644 00:45:39.503 16:22:43 -- common/autotest_common.sh@926 -- # '[' -z 87644 ']' 00:45:39.503 16:22:43 -- common/autotest_common.sh@930 
-- # kill -0 87644 00:45:39.503 16:22:43 -- common/autotest_common.sh@931 -- # uname 00:45:39.503 16:22:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:45:39.503 16:22:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87644 00:45:39.503 killing process with pid 87644 00:45:39.503 Received shutdown signal, test time was about 60.000000 seconds 00:45:39.503 00:45:39.503 Latency(us) 00:45:39.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:39.503 =================================================================================================================== 00:45:39.503 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:45:39.503 16:22:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:45:39.503 16:22:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:45:39.503 16:22:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87644' 00:45:39.503 16:22:43 -- common/autotest_common.sh@945 -- # kill 87644 00:45:39.503 16:22:43 -- common/autotest_common.sh@950 -- # wait 87644 00:45:39.503 [2024-07-22 16:22:43.503180] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:45:39.762 [2024-07-22 16:22:43.956945] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:45:41.138 ************************************ 00:45:41.138 END TEST raid5f_rebuild_test 00:45:41.138 ************************************ 00:45:41.138 16:22:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:45:41.138 00:45:41.138 real 0m24.331s 00:45:41.138 user 0m33.198s 00:45:41.138 sys 0m3.332s 00:45:41.138 16:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:45:41.138 16:22:45 -- common/autotest_common.sh@10 -- # set +x 00:45:41.138 16:22:45 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:45:41.138 16:22:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:45:41.138 16:22:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:45:41.139 16:22:45 -- common/autotest_common.sh@10 -- # set +x 00:45:41.139 ************************************ 00:45:41.139 START TEST raid5f_rebuild_test_sb 00:45:41.139 ************************************ 00:45:41.139 16:22:45 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 
00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=88209 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 88209 /var/tmp/spdk-raid.sock 00:45:41.139 16:22:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:45:41.139 16:22:45 -- common/autotest_common.sh@819 -- # '[' -z 88209 ']' 00:45:41.139 16:22:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:45:41.139 16:22:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:45:41.139 16:22:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:45:41.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:45:41.139 16:22:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:45:41.139 16:22:45 -- common/autotest_common.sh@10 -- # set +x 00:45:41.139 [2024-07-22 16:22:45.388266] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:45:41.139 [2024-07-22 16:22:45.388720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88209 ] 00:45:41.139 I/O size of 3145728 is greater than zero copy threshold (65536). 00:45:41.139 Zero copy mechanism will not be used. 
00:45:41.397 [2024-07-22 16:22:45.568810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:41.655 [2024-07-22 16:22:45.841746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:41.914 [2024-07-22 16:22:46.066372] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:45:42.172 16:22:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:45:42.172 16:22:46 -- common/autotest_common.sh@852 -- # return 0 00:45:42.172 16:22:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:42.172 16:22:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:45:42.172 16:22:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:45:42.430 BaseBdev1_malloc 00:45:42.430 16:22:46 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:45:42.688 [2024-07-22 16:22:46.843050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:45:42.688 [2024-07-22 16:22:46.843195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:42.688 [2024-07-22 16:22:46.843246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:45:42.688 [2024-07-22 16:22:46.843268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:42.688 [2024-07-22 16:22:46.846763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:42.688 [2024-07-22 16:22:46.846974] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:45:42.688 BaseBdev1 00:45:42.688 16:22:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:42.688 16:22:46 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:45:42.688 16:22:46 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:45:42.946 BaseBdev2_malloc 00:45:42.946 16:22:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:45:43.204 [2024-07-22 16:22:47.384246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:45:43.204 [2024-07-22 16:22:47.384395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:43.204 [2024-07-22 16:22:47.384466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:45:43.204 [2024-07-22 16:22:47.384494] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:43.204 [2024-07-22 16:22:47.387607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:43.204 [2024-07-22 16:22:47.387672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:45:43.204 BaseBdev2 00:45:43.204 16:22:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:43.204 16:22:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:45:43.204 16:22:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:45:43.462 BaseBdev3_malloc 00:45:43.462 16:22:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:45:43.720 [2024-07-22 16:22:47.870495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:45:43.720 [2024-07-22 16:22:47.870609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:43.720 [2024-07-22 16:22:47.870650] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:45:43.720 [2024-07-22 16:22:47.870669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:43.720 [2024-07-22 16:22:47.873743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:43.720 [2024-07-22 16:22:47.873796] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:45:43.720 BaseBdev3 00:45:43.720 16:22:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:45:43.720 16:22:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:45:43.720 16:22:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:45:43.979 BaseBdev4_malloc 00:45:43.979 16:22:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:45:44.237 [2024-07-22 16:22:48.376287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:45:44.237 [2024-07-22 16:22:48.376449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:44.237 [2024-07-22 16:22:48.376514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:45:44.237 [2024-07-22 16:22:48.376536] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:44.237 [2024-07-22 16:22:48.379620] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:44.237 BaseBdev4 00:45:44.237 [2024-07-22 16:22:48.379843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:45:44.237 16:22:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:45:44.495 spare_malloc 00:45:44.495 16:22:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:45:44.753 spare_delay 00:45:44.753 16:22:48 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:45:45.011 [2024-07-22 16:22:49.115806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:45:45.011 [2024-07-22 16:22:49.116206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:45.011 [2024-07-22 16:22:49.116415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:45:45.011 [2024-07-22 16:22:49.116573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:45.011 [2024-07-22 16:22:49.119790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:45:45.011 [2024-07-22 16:22:49.119962] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:45:45.011 spare 00:45:45.011 16:22:49 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:45:45.269 [2024-07-22 16:22:49.340568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:45:45.269 [2024-07-22 16:22:49.343211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:45:45.269 [2024-07-22 16:22:49.343434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:45:45.269 [2024-07-22 16:22:49.343562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:45:45.269 [2024-07-22 16:22:49.343921] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:45:45.269 [2024-07-22 16:22:49.344100] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:45:45.269 [2024-07-22 16:22:49.344272] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:45:45.269 [2024-07-22 16:22:49.351686] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:45:45.269 [2024-07-22 16:22:49.351819] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:45:45.269 [2024-07-22 16:22:49.352212] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:45.269 16:22:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:45.527 16:22:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:45.528 "name": "raid_bdev1", 00:45:45.528 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:45.528 "strip_size_kb": 64, 00:45:45.528 "state": "online", 00:45:45.528 "raid_level": "raid5f", 00:45:45.528 "superblock": true, 00:45:45.528 "num_base_bdevs": 4, 00:45:45.528 "num_base_bdevs_discovered": 4, 00:45:45.528 "num_base_bdevs_operational": 4, 00:45:45.528 "base_bdevs_list": [ 00:45:45.528 { 00:45:45.528 "name": "BaseBdev1", 00:45:45.528 "uuid": "fd8a2253-5a60-5b74-a1f6-c7107ddfc62f", 00:45:45.528 "is_configured": true, 00:45:45.528 "data_offset": 2048, 00:45:45.528 "data_size": 63488 00:45:45.528 }, 00:45:45.528 { 00:45:45.528 "name": "BaseBdev2", 00:45:45.528 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:45.528 "is_configured": true, 00:45:45.528 "data_offset": 2048, 00:45:45.528 "data_size": 63488 00:45:45.528 }, 00:45:45.528 { 00:45:45.528 "name": "BaseBdev3", 00:45:45.528 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:45.528 "is_configured": true, 00:45:45.528 "data_offset": 2048, 00:45:45.528 "data_size": 63488 00:45:45.528 
}, 00:45:45.528 { 00:45:45.528 "name": "BaseBdev4", 00:45:45.528 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:45.528 "is_configured": true, 00:45:45.528 "data_offset": 2048, 00:45:45.528 "data_size": 63488 00:45:45.528 } 00:45:45.528 ] 00:45:45.528 }' 00:45:45.528 16:22:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:45.528 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:45:45.787 16:22:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:45:45.787 16:22:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:45:46.045 [2024-07-22 16:22:50.264494] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:45:46.045 16:22:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:45:46.045 16:22:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:46.045 16:22:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:45:46.611 16:22:50 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:45:46.611 16:22:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:45:46.611 16:22:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:45:46.611 16:22:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@12 -- # local i 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:46.611 16:22:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:45:46.611 [2024-07-22 16:22:50.852620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:45:46.611 /dev/nbd0 00:45:46.869 16:22:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:45:46.870 16:22:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:45:46.870 16:22:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:45:46.870 16:22:50 -- common/autotest_common.sh@857 -- # local i 00:45:46.870 16:22:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:45:46.870 16:22:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:45:46.870 16:22:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:45:46.870 16:22:50 -- common/autotest_common.sh@861 -- # break 00:45:46.870 16:22:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:45:46.870 16:22:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:45:46.870 16:22:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:45:46.870 1+0 records in 00:45:46.870 1+0 records out 00:45:46.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584018 s, 7.0 MB/s 00:45:46.870 16:22:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:46.870 16:22:50 -- common/autotest_common.sh@874 -- # size=4096 00:45:46.870 16:22:50 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:45:46.870 16:22:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:45:46.870 16:22:50 -- common/autotest_common.sh@877 -- # return 0 00:45:46.870 16:22:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:45:46.870 16:22:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:45:46.870 16:22:50 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:45:46.870 16:22:50 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:45:46.870 16:22:50 -- bdev/bdev_raid.sh@582 -- # echo 192 00:45:46.870 16:22:50 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:45:47.437 496+0 records in 00:45:47.437 496+0 records out 00:45:47.437 97517568 bytes (98 MB, 93 MiB) copied, 0.602277 s, 162 MB/s 00:45:47.437 16:22:51 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:45:47.437 16:22:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:45:47.437 16:22:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:45:47.437 16:22:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:45:47.437 16:22:51 -- bdev/nbd_common.sh@51 -- # local i 00:45:47.437 16:22:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:45:47.437 16:22:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:45:47.696 [2024-07-22 16:22:51.737679] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@41 -- # break 00:45:47.696 16:22:51 -- bdev/nbd_common.sh@45 -- # return 0 00:45:47.696 16:22:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:45:47.953 [2024-07-22 16:22:52.021774] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:47.954 16:22:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:48.212 16:22:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:48.212 "name": "raid_bdev1", 00:45:48.212 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:48.212 
"strip_size_kb": 64, 00:45:48.212 "state": "online", 00:45:48.212 "raid_level": "raid5f", 00:45:48.212 "superblock": true, 00:45:48.212 "num_base_bdevs": 4, 00:45:48.212 "num_base_bdevs_discovered": 3, 00:45:48.212 "num_base_bdevs_operational": 3, 00:45:48.212 "base_bdevs_list": [ 00:45:48.212 { 00:45:48.212 "name": null, 00:45:48.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:48.212 "is_configured": false, 00:45:48.212 "data_offset": 2048, 00:45:48.212 "data_size": 63488 00:45:48.212 }, 00:45:48.212 { 00:45:48.212 "name": "BaseBdev2", 00:45:48.212 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:48.212 "is_configured": true, 00:45:48.212 "data_offset": 2048, 00:45:48.212 "data_size": 63488 00:45:48.212 }, 00:45:48.212 { 00:45:48.212 "name": "BaseBdev3", 00:45:48.212 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:48.212 "is_configured": true, 00:45:48.212 "data_offset": 2048, 00:45:48.212 "data_size": 63488 00:45:48.212 }, 00:45:48.212 { 00:45:48.212 "name": "BaseBdev4", 00:45:48.212 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:48.212 "is_configured": true, 00:45:48.212 "data_offset": 2048, 00:45:48.212 "data_size": 63488 00:45:48.212 } 00:45:48.212 ] 00:45:48.212 }' 00:45:48.212 16:22:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:48.212 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:45:48.470 16:22:52 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:45:48.729 [2024-07-22 16:22:52.874005] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:45:48.729 [2024-07-22 16:22:52.874145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:45:48.729 [2024-07-22 16:22:52.888025] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a300 00:45:48.729 [2024-07-22 16:22:52.897262] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:45:48.729 16:22:52 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:45:49.664 16:22:53 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:49.664 16:22:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:49.664 16:22:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:49.664 16:22:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:49.664 16:22:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:49.664 16:22:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:49.664 16:22:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:50.230 16:22:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:50.230 "name": "raid_bdev1", 00:45:50.230 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:50.230 "strip_size_kb": 64, 00:45:50.230 "state": "online", 00:45:50.230 "raid_level": "raid5f", 00:45:50.230 "superblock": true, 00:45:50.230 "num_base_bdevs": 4, 00:45:50.230 "num_base_bdevs_discovered": 4, 00:45:50.230 "num_base_bdevs_operational": 4, 00:45:50.230 "process": { 00:45:50.230 "type": "rebuild", 00:45:50.230 "target": "spare", 00:45:50.230 "progress": { 00:45:50.230 "blocks": 23040, 00:45:50.230 "percent": 12 00:45:50.230 } 00:45:50.230 }, 00:45:50.230 "base_bdevs_list": [ 00:45:50.230 { 00:45:50.230 "name": "spare", 00:45:50.230 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:50.230 "is_configured": true, 
00:45:50.230 "data_offset": 2048, 00:45:50.230 "data_size": 63488 00:45:50.230 }, 00:45:50.230 { 00:45:50.230 "name": "BaseBdev2", 00:45:50.230 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:50.230 "is_configured": true, 00:45:50.230 "data_offset": 2048, 00:45:50.230 "data_size": 63488 00:45:50.230 }, 00:45:50.230 { 00:45:50.230 "name": "BaseBdev3", 00:45:50.230 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:50.230 "is_configured": true, 00:45:50.230 "data_offset": 2048, 00:45:50.230 "data_size": 63488 00:45:50.230 }, 00:45:50.230 { 00:45:50.230 "name": "BaseBdev4", 00:45:50.230 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:50.230 "is_configured": true, 00:45:50.230 "data_offset": 2048, 00:45:50.230 "data_size": 63488 00:45:50.230 } 00:45:50.230 ] 00:45:50.230 }' 00:45:50.230 16:22:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:50.230 16:22:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:50.230 16:22:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:50.230 16:22:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:50.230 16:22:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:45:50.230 [2024-07-22 16:22:54.485511] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:45:50.489 [2024-07-22 16:22:54.517220] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:45:50.489 [2024-07-22 16:22:54.517570] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:50.489 16:22:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:50.747 16:22:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:45:50.747 "name": "raid_bdev1", 00:45:50.747 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:50.747 "strip_size_kb": 64, 00:45:50.747 "state": "online", 00:45:50.747 "raid_level": "raid5f", 00:45:50.747 "superblock": true, 00:45:50.747 "num_base_bdevs": 4, 00:45:50.747 "num_base_bdevs_discovered": 3, 00:45:50.747 "num_base_bdevs_operational": 3, 00:45:50.747 "base_bdevs_list": [ 00:45:50.747 { 00:45:50.747 "name": null, 00:45:50.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:50.747 "is_configured": false, 00:45:50.747 "data_offset": 2048, 00:45:50.747 "data_size": 63488 00:45:50.747 }, 00:45:50.747 { 00:45:50.747 "name": "BaseBdev2", 00:45:50.747 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:50.747 "is_configured": true, 00:45:50.747 "data_offset": 
2048, 00:45:50.747 "data_size": 63488 00:45:50.747 }, 00:45:50.747 { 00:45:50.747 "name": "BaseBdev3", 00:45:50.747 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:50.747 "is_configured": true, 00:45:50.747 "data_offset": 2048, 00:45:50.747 "data_size": 63488 00:45:50.747 }, 00:45:50.747 { 00:45:50.747 "name": "BaseBdev4", 00:45:50.747 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:50.748 "is_configured": true, 00:45:50.748 "data_offset": 2048, 00:45:50.748 "data_size": 63488 00:45:50.748 } 00:45:50.748 ] 00:45:50.748 }' 00:45:50.748 16:22:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:45:50.748 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:45:51.010 16:22:55 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:45:51.010 16:22:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:51.010 16:22:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:45:51.010 16:22:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:45:51.010 16:22:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:51.010 16:22:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:51.010 16:22:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:51.269 16:22:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:51.269 "name": "raid_bdev1", 00:45:51.269 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:51.269 "strip_size_kb": 64, 00:45:51.269 "state": "online", 00:45:51.269 "raid_level": "raid5f", 00:45:51.269 "superblock": true, 00:45:51.269 "num_base_bdevs": 4, 00:45:51.269 "num_base_bdevs_discovered": 3, 00:45:51.269 "num_base_bdevs_operational": 3, 00:45:51.269 "base_bdevs_list": [ 00:45:51.269 { 00:45:51.269 "name": null, 00:45:51.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:45:51.269 "is_configured": false, 00:45:51.269 "data_offset": 2048, 00:45:51.269 "data_size": 63488 00:45:51.269 }, 00:45:51.269 { 00:45:51.269 "name": "BaseBdev2", 00:45:51.269 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:51.269 "is_configured": true, 00:45:51.269 "data_offset": 2048, 00:45:51.269 "data_size": 63488 00:45:51.269 }, 00:45:51.269 { 00:45:51.269 "name": "BaseBdev3", 00:45:51.269 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:51.269 "is_configured": true, 00:45:51.269 "data_offset": 2048, 00:45:51.269 "data_size": 63488 00:45:51.269 }, 00:45:51.269 { 00:45:51.269 "name": "BaseBdev4", 00:45:51.269 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:51.269 "is_configured": true, 00:45:51.269 "data_offset": 2048, 00:45:51.269 "data_size": 63488 00:45:51.269 } 00:45:51.269 ] 00:45:51.269 }' 00:45:51.269 16:22:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:51.269 16:22:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:45:51.269 16:22:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:51.269 16:22:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:45:51.269 16:22:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:45:51.528 [2024-07-22 16:22:55.658583] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:45:51.528 [2024-07-22 16:22:55.658641] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:45:51.528 [2024-07-22 16:22:55.672075] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x50d00002a3d0 00:45:51.528 [2024-07-22 16:22:55.681388] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:45:51.528 16:22:55 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:45:52.463 16:22:56 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:52.463 16:22:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:52.463 16:22:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:52.463 16:22:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:52.463 16:22:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:52.463 16:22:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:52.463 16:22:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:52.721 16:22:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:52.721 "name": "raid_bdev1", 00:45:52.721 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:52.721 "strip_size_kb": 64, 00:45:52.721 "state": "online", 00:45:52.721 "raid_level": "raid5f", 00:45:52.721 "superblock": true, 00:45:52.721 "num_base_bdevs": 4, 00:45:52.721 "num_base_bdevs_discovered": 4, 00:45:52.721 "num_base_bdevs_operational": 4, 00:45:52.721 "process": { 00:45:52.721 "type": "rebuild", 00:45:52.721 "target": "spare", 00:45:52.721 "progress": { 00:45:52.721 "blocks": 23040, 00:45:52.721 "percent": 12 00:45:52.721 } 00:45:52.721 }, 00:45:52.721 "base_bdevs_list": [ 00:45:52.721 { 00:45:52.721 "name": "spare", 00:45:52.721 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:52.721 "is_configured": true, 00:45:52.721 "data_offset": 2048, 00:45:52.721 "data_size": 63488 00:45:52.721 }, 00:45:52.721 { 00:45:52.721 "name": "BaseBdev2", 00:45:52.721 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:52.721 "is_configured": true, 00:45:52.721 "data_offset": 2048, 00:45:52.721 "data_size": 63488 00:45:52.721 }, 00:45:52.721 { 00:45:52.721 "name": "BaseBdev3", 00:45:52.721 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:52.721 "is_configured": true, 00:45:52.721 "data_offset": 2048, 00:45:52.721 "data_size": 63488 00:45:52.721 }, 00:45:52.721 { 00:45:52.721 "name": "BaseBdev4", 00:45:52.721 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:52.721 "is_configured": true, 00:45:52.721 "data_offset": 2048, 00:45:52.721 "data_size": 63488 00:45:52.721 } 00:45:52.721 ] 00:45:52.721 }' 00:45:52.721 16:22:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:52.721 16:22:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:52.721 16:22:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:45:52.980 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@657 -- # local timeout=741 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:52.980 16:22:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:53.255 16:22:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:53.255 "name": "raid_bdev1", 00:45:53.255 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:53.255 "strip_size_kb": 64, 00:45:53.255 "state": "online", 00:45:53.255 "raid_level": "raid5f", 00:45:53.255 "superblock": true, 00:45:53.255 "num_base_bdevs": 4, 00:45:53.255 "num_base_bdevs_discovered": 4, 00:45:53.255 "num_base_bdevs_operational": 4, 00:45:53.255 "process": { 00:45:53.255 "type": "rebuild", 00:45:53.255 "target": "spare", 00:45:53.255 "progress": { 00:45:53.255 "blocks": 28800, 00:45:53.255 "percent": 15 00:45:53.255 } 00:45:53.255 }, 00:45:53.255 "base_bdevs_list": [ 00:45:53.255 { 00:45:53.255 "name": "spare", 00:45:53.255 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:53.255 "is_configured": true, 00:45:53.255 "data_offset": 2048, 00:45:53.255 "data_size": 63488 00:45:53.255 }, 00:45:53.255 { 00:45:53.255 "name": "BaseBdev2", 00:45:53.255 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:53.255 "is_configured": true, 00:45:53.255 "data_offset": 2048, 00:45:53.255 "data_size": 63488 00:45:53.255 }, 00:45:53.255 { 00:45:53.255 "name": "BaseBdev3", 00:45:53.255 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:53.255 "is_configured": true, 00:45:53.255 "data_offset": 2048, 00:45:53.255 "data_size": 63488 00:45:53.255 }, 00:45:53.255 { 00:45:53.255 "name": "BaseBdev4", 00:45:53.255 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:53.255 "is_configured": true, 00:45:53.255 "data_offset": 2048, 00:45:53.255 "data_size": 63488 00:45:53.255 } 00:45:53.255 ] 00:45:53.255 }' 00:45:53.255 16:22:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:53.255 16:22:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:53.255 16:22:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:53.255 16:22:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:53.255 16:22:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:54.190 16:22:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:54.448 16:22:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:54.449 "name": "raid_bdev1", 00:45:54.449 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:54.449 "strip_size_kb": 64, 00:45:54.449 "state": "online", 00:45:54.449 "raid_level": "raid5f", 00:45:54.449 "superblock": true, 00:45:54.449 "num_base_bdevs": 4, 00:45:54.449 
"num_base_bdevs_discovered": 4, 00:45:54.449 "num_base_bdevs_operational": 4, 00:45:54.449 "process": { 00:45:54.449 "type": "rebuild", 00:45:54.449 "target": "spare", 00:45:54.449 "progress": { 00:45:54.449 "blocks": 53760, 00:45:54.449 "percent": 28 00:45:54.449 } 00:45:54.449 }, 00:45:54.449 "base_bdevs_list": [ 00:45:54.449 { 00:45:54.449 "name": "spare", 00:45:54.449 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:54.449 "is_configured": true, 00:45:54.449 "data_offset": 2048, 00:45:54.449 "data_size": 63488 00:45:54.449 }, 00:45:54.449 { 00:45:54.449 "name": "BaseBdev2", 00:45:54.449 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:54.449 "is_configured": true, 00:45:54.449 "data_offset": 2048, 00:45:54.449 "data_size": 63488 00:45:54.449 }, 00:45:54.449 { 00:45:54.449 "name": "BaseBdev3", 00:45:54.449 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:54.449 "is_configured": true, 00:45:54.449 "data_offset": 2048, 00:45:54.449 "data_size": 63488 00:45:54.449 }, 00:45:54.449 { 00:45:54.449 "name": "BaseBdev4", 00:45:54.449 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:54.449 "is_configured": true, 00:45:54.449 "data_offset": 2048, 00:45:54.449 "data_size": 63488 00:45:54.449 } 00:45:54.449 ] 00:45:54.449 }' 00:45:54.449 16:22:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:54.449 16:22:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:54.449 16:22:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:54.449 16:22:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:54.449 16:22:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:55.383 16:22:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:55.641 16:22:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:55.641 "name": "raid_bdev1", 00:45:55.641 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:55.641 "strip_size_kb": 64, 00:45:55.641 "state": "online", 00:45:55.641 "raid_level": "raid5f", 00:45:55.641 "superblock": true, 00:45:55.641 "num_base_bdevs": 4, 00:45:55.641 "num_base_bdevs_discovered": 4, 00:45:55.641 "num_base_bdevs_operational": 4, 00:45:55.641 "process": { 00:45:55.641 "type": "rebuild", 00:45:55.641 "target": "spare", 00:45:55.641 "progress": { 00:45:55.641 "blocks": 78720, 00:45:55.641 "percent": 41 00:45:55.641 } 00:45:55.641 }, 00:45:55.641 "base_bdevs_list": [ 00:45:55.641 { 00:45:55.641 "name": "spare", 00:45:55.641 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:55.641 "is_configured": true, 00:45:55.641 "data_offset": 2048, 00:45:55.641 "data_size": 63488 00:45:55.641 }, 00:45:55.641 { 00:45:55.641 "name": "BaseBdev2", 00:45:55.641 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:55.641 "is_configured": true, 00:45:55.641 "data_offset": 2048, 00:45:55.641 "data_size": 63488 00:45:55.641 }, 00:45:55.641 { 00:45:55.641 "name": "BaseBdev3", 00:45:55.641 
"uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:55.641 "is_configured": true, 00:45:55.641 "data_offset": 2048, 00:45:55.641 "data_size": 63488 00:45:55.641 }, 00:45:55.641 { 00:45:55.641 "name": "BaseBdev4", 00:45:55.641 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:55.641 "is_configured": true, 00:45:55.641 "data_offset": 2048, 00:45:55.641 "data_size": 63488 00:45:55.641 } 00:45:55.641 ] 00:45:55.641 }' 00:45:55.641 16:22:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:55.641 16:22:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:55.641 16:22:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:55.641 16:22:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:55.641 16:22:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:57.015 16:23:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:57.015 16:23:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:57.015 "name": "raid_bdev1", 00:45:57.015 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:57.015 "strip_size_kb": 64, 00:45:57.015 "state": "online", 00:45:57.015 "raid_level": "raid5f", 00:45:57.015 "superblock": true, 00:45:57.015 "num_base_bdevs": 4, 00:45:57.015 "num_base_bdevs_discovered": 4, 00:45:57.015 "num_base_bdevs_operational": 4, 00:45:57.015 "process": { 00:45:57.015 "type": "rebuild", 00:45:57.015 "target": "spare", 00:45:57.015 "progress": { 00:45:57.015 "blocks": 101760, 00:45:57.015 "percent": 53 00:45:57.015 } 00:45:57.015 }, 00:45:57.015 "base_bdevs_list": [ 00:45:57.015 { 00:45:57.015 "name": "spare", 00:45:57.015 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:57.015 "is_configured": true, 00:45:57.015 "data_offset": 2048, 00:45:57.015 "data_size": 63488 00:45:57.015 }, 00:45:57.015 { 00:45:57.015 "name": "BaseBdev2", 00:45:57.015 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:57.015 "is_configured": true, 00:45:57.015 "data_offset": 2048, 00:45:57.015 "data_size": 63488 00:45:57.015 }, 00:45:57.015 { 00:45:57.015 "name": "BaseBdev3", 00:45:57.015 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:57.015 "is_configured": true, 00:45:57.015 "data_offset": 2048, 00:45:57.015 "data_size": 63488 00:45:57.015 }, 00:45:57.015 { 00:45:57.015 "name": "BaseBdev4", 00:45:57.015 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:57.015 "is_configured": true, 00:45:57.015 "data_offset": 2048, 00:45:57.015 "data_size": 63488 00:45:57.015 } 00:45:57.015 ] 00:45:57.015 }' 00:45:57.015 16:23:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:57.015 16:23:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:57.015 16:23:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:57.015 16:23:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:57.015 16:23:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:57.984 
16:23:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:57.984 16:23:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:57.984 16:23:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:57.984 16:23:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:57.984 16:23:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:57.984 16:23:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:57.984 16:23:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:57.984 16:23:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:58.278 16:23:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:58.278 "name": "raid_bdev1", 00:45:58.278 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:58.278 "strip_size_kb": 64, 00:45:58.278 "state": "online", 00:45:58.278 "raid_level": "raid5f", 00:45:58.278 "superblock": true, 00:45:58.278 "num_base_bdevs": 4, 00:45:58.278 "num_base_bdevs_discovered": 4, 00:45:58.278 "num_base_bdevs_operational": 4, 00:45:58.278 "process": { 00:45:58.278 "type": "rebuild", 00:45:58.278 "target": "spare", 00:45:58.278 "progress": { 00:45:58.278 "blocks": 126720, 00:45:58.278 "percent": 66 00:45:58.278 } 00:45:58.278 }, 00:45:58.278 "base_bdevs_list": [ 00:45:58.278 { 00:45:58.278 "name": "spare", 00:45:58.278 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:58.278 "is_configured": true, 00:45:58.278 "data_offset": 2048, 00:45:58.278 "data_size": 63488 00:45:58.278 }, 00:45:58.278 { 00:45:58.278 "name": "BaseBdev2", 00:45:58.278 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:58.278 "is_configured": true, 00:45:58.278 "data_offset": 2048, 00:45:58.278 "data_size": 63488 00:45:58.278 }, 00:45:58.278 { 00:45:58.278 "name": "BaseBdev3", 00:45:58.278 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:58.278 "is_configured": true, 00:45:58.278 "data_offset": 2048, 00:45:58.278 "data_size": 63488 00:45:58.278 }, 00:45:58.278 { 00:45:58.278 "name": "BaseBdev4", 00:45:58.278 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:58.278 "is_configured": true, 00:45:58.278 "data_offset": 2048, 00:45:58.278 "data_size": 63488 00:45:58.278 } 00:45:58.278 ] 00:45:58.278 }' 00:45:58.278 16:23:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:58.278 16:23:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:58.278 16:23:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:58.278 16:23:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:58.278 16:23:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:45:59.214 16:23:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:45:59.472 16:23:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:45:59.472 "name": "raid_bdev1", 
00:45:59.472 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:45:59.472 "strip_size_kb": 64, 00:45:59.472 "state": "online", 00:45:59.472 "raid_level": "raid5f", 00:45:59.472 "superblock": true, 00:45:59.472 "num_base_bdevs": 4, 00:45:59.472 "num_base_bdevs_discovered": 4, 00:45:59.472 "num_base_bdevs_operational": 4, 00:45:59.472 "process": { 00:45:59.472 "type": "rebuild", 00:45:59.472 "target": "spare", 00:45:59.472 "progress": { 00:45:59.472 "blocks": 151680, 00:45:59.472 "percent": 79 00:45:59.472 } 00:45:59.473 }, 00:45:59.473 "base_bdevs_list": [ 00:45:59.473 { 00:45:59.473 "name": "spare", 00:45:59.473 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:45:59.473 "is_configured": true, 00:45:59.473 "data_offset": 2048, 00:45:59.473 "data_size": 63488 00:45:59.473 }, 00:45:59.473 { 00:45:59.473 "name": "BaseBdev2", 00:45:59.473 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:45:59.473 "is_configured": true, 00:45:59.473 "data_offset": 2048, 00:45:59.473 "data_size": 63488 00:45:59.473 }, 00:45:59.473 { 00:45:59.473 "name": "BaseBdev3", 00:45:59.473 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:45:59.473 "is_configured": true, 00:45:59.473 "data_offset": 2048, 00:45:59.473 "data_size": 63488 00:45:59.473 }, 00:45:59.473 { 00:45:59.473 "name": "BaseBdev4", 00:45:59.473 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:45:59.473 "is_configured": true, 00:45:59.473 "data_offset": 2048, 00:45:59.473 "data_size": 63488 00:45:59.473 } 00:45:59.473 ] 00:45:59.473 }' 00:45:59.473 16:23:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:45:59.730 16:23:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:45:59.730 16:23:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:45:59.730 16:23:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:45:59.730 16:23:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:00.661 16:23:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:00.918 16:23:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:46:00.918 "name": "raid_bdev1", 00:46:00.918 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:46:00.918 "strip_size_kb": 64, 00:46:00.918 "state": "online", 00:46:00.918 "raid_level": "raid5f", 00:46:00.918 "superblock": true, 00:46:00.918 "num_base_bdevs": 4, 00:46:00.918 "num_base_bdevs_discovered": 4, 00:46:00.918 "num_base_bdevs_operational": 4, 00:46:00.918 "process": { 00:46:00.918 "type": "rebuild", 00:46:00.918 "target": "spare", 00:46:00.918 "progress": { 00:46:00.918 "blocks": 176640, 00:46:00.918 "percent": 92 00:46:00.918 } 00:46:00.918 }, 00:46:00.918 "base_bdevs_list": [ 00:46:00.918 { 00:46:00.918 "name": "spare", 00:46:00.918 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:46:00.918 "is_configured": true, 00:46:00.918 "data_offset": 2048, 00:46:00.918 "data_size": 63488 00:46:00.918 }, 00:46:00.918 { 00:46:00.918 "name": 
"BaseBdev2", 00:46:00.918 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:46:00.918 "is_configured": true, 00:46:00.918 "data_offset": 2048, 00:46:00.918 "data_size": 63488 00:46:00.918 }, 00:46:00.918 { 00:46:00.918 "name": "BaseBdev3", 00:46:00.918 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:46:00.918 "is_configured": true, 00:46:00.918 "data_offset": 2048, 00:46:00.918 "data_size": 63488 00:46:00.918 }, 00:46:00.918 { 00:46:00.918 "name": "BaseBdev4", 00:46:00.918 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:46:00.918 "is_configured": true, 00:46:00.918 "data_offset": 2048, 00:46:00.918 "data_size": 63488 00:46:00.918 } 00:46:00.918 ] 00:46:00.918 }' 00:46:00.918 16:23:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:46:00.918 16:23:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:46:00.918 16:23:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:46:00.918 16:23:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:46:00.918 16:23:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:46:01.851 [2024-07-22 16:23:05.797879] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:46:01.851 [2024-07-22 16:23:05.798008] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:46:01.851 [2024-07-22 16:23:05.798234] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:01.851 16:23:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:46:02.109 "name": "raid_bdev1", 00:46:02.109 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:46:02.109 "strip_size_kb": 64, 00:46:02.109 "state": "online", 00:46:02.109 "raid_level": "raid5f", 00:46:02.109 "superblock": true, 00:46:02.109 "num_base_bdevs": 4, 00:46:02.109 "num_base_bdevs_discovered": 4, 00:46:02.109 "num_base_bdevs_operational": 4, 00:46:02.109 "base_bdevs_list": [ 00:46:02.109 { 00:46:02.109 "name": "spare", 00:46:02.109 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:46:02.109 "is_configured": true, 00:46:02.109 "data_offset": 2048, 00:46:02.109 "data_size": 63488 00:46:02.109 }, 00:46:02.109 { 00:46:02.109 "name": "BaseBdev2", 00:46:02.109 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:46:02.109 "is_configured": true, 00:46:02.109 "data_offset": 2048, 00:46:02.109 "data_size": 63488 00:46:02.109 }, 00:46:02.109 { 00:46:02.109 "name": "BaseBdev3", 00:46:02.109 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:46:02.109 "is_configured": true, 00:46:02.109 "data_offset": 2048, 00:46:02.109 "data_size": 63488 00:46:02.109 }, 00:46:02.109 { 00:46:02.109 "name": "BaseBdev4", 00:46:02.109 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:46:02.109 "is_configured": true, 00:46:02.109 "data_offset": 2048, 00:46:02.109 "data_size": 63488 00:46:02.109 } 
00:46:02.109 ] 00:46:02.109 }' 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@660 -- # break 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:02.109 16:23:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:02.367 16:23:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:46:02.367 "name": "raid_bdev1", 00:46:02.367 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:46:02.367 "strip_size_kb": 64, 00:46:02.367 "state": "online", 00:46:02.367 "raid_level": "raid5f", 00:46:02.367 "superblock": true, 00:46:02.367 "num_base_bdevs": 4, 00:46:02.367 "num_base_bdevs_discovered": 4, 00:46:02.367 "num_base_bdevs_operational": 4, 00:46:02.367 "base_bdevs_list": [ 00:46:02.367 { 00:46:02.367 "name": "spare", 00:46:02.367 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:46:02.367 "is_configured": true, 00:46:02.367 "data_offset": 2048, 00:46:02.367 "data_size": 63488 00:46:02.367 }, 00:46:02.367 { 00:46:02.367 "name": "BaseBdev2", 00:46:02.367 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:46:02.367 "is_configured": true, 00:46:02.367 "data_offset": 2048, 00:46:02.367 "data_size": 63488 00:46:02.367 }, 00:46:02.367 { 00:46:02.367 "name": "BaseBdev3", 00:46:02.367 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:46:02.367 "is_configured": true, 00:46:02.368 "data_offset": 2048, 00:46:02.368 "data_size": 63488 00:46:02.368 }, 00:46:02.368 { 00:46:02.368 "name": "BaseBdev4", 00:46:02.368 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:46:02.368 "is_configured": true, 00:46:02.368 "data_offset": 2048, 00:46:02.368 "data_size": 63488 00:46:02.368 } 00:46:02.368 ] 00:46:02.368 }' 00:46:02.368 16:23:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:46:02.626 16:23:06 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:02.626 16:23:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:02.883 16:23:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:46:02.883 "name": "raid_bdev1", 00:46:02.883 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:46:02.883 "strip_size_kb": 64, 00:46:02.883 "state": "online", 00:46:02.883 "raid_level": "raid5f", 00:46:02.883 "superblock": true, 00:46:02.883 "num_base_bdevs": 4, 00:46:02.883 "num_base_bdevs_discovered": 4, 00:46:02.883 "num_base_bdevs_operational": 4, 00:46:02.883 "base_bdevs_list": [ 00:46:02.883 { 00:46:02.883 "name": "spare", 00:46:02.883 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:46:02.883 "is_configured": true, 00:46:02.883 "data_offset": 2048, 00:46:02.883 "data_size": 63488 00:46:02.883 }, 00:46:02.883 { 00:46:02.883 "name": "BaseBdev2", 00:46:02.883 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:46:02.883 "is_configured": true, 00:46:02.883 "data_offset": 2048, 00:46:02.883 "data_size": 63488 00:46:02.883 }, 00:46:02.883 { 00:46:02.883 "name": "BaseBdev3", 00:46:02.883 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:46:02.883 "is_configured": true, 00:46:02.883 "data_offset": 2048, 00:46:02.883 "data_size": 63488 00:46:02.883 }, 00:46:02.883 { 00:46:02.883 "name": "BaseBdev4", 00:46:02.883 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:46:02.883 "is_configured": true, 00:46:02.884 "data_offset": 2048, 00:46:02.884 "data_size": 63488 00:46:02.884 } 00:46:02.884 ] 00:46:02.884 }' 00:46:02.884 16:23:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:46:02.884 16:23:06 -- common/autotest_common.sh@10 -- # set +x 00:46:03.141 16:23:07 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:46:03.399 [2024-07-22 16:23:07.486032] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:03.399 [2024-07-22 16:23:07.486098] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:46:03.399 [2024-07-22 16:23:07.486214] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:03.399 [2024-07-22 16:23:07.486365] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:03.399 [2024-07-22 16:23:07.486383] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:46:03.399 16:23:07 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:03.399 16:23:07 -- bdev/bdev_raid.sh@671 -- # jq length 00:46:03.657 16:23:07 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:46:03.657 16:23:07 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:46:03.657 16:23:07 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:46:03.657 16:23:07 -- 
bdev/nbd_common.sh@12 -- # local i 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:03.657 16:23:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:46:03.915 /dev/nbd0 00:46:03.915 16:23:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:46:03.915 16:23:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:46:03.915 16:23:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:46:03.915 16:23:08 -- common/autotest_common.sh@857 -- # local i 00:46:03.915 16:23:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:46:03.915 16:23:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:46:03.915 16:23:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:46:03.915 16:23:08 -- common/autotest_common.sh@861 -- # break 00:46:03.915 16:23:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:46:03.915 16:23:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:46:03.915 16:23:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:03.915 1+0 records in 00:46:03.915 1+0 records out 00:46:03.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411134 s, 10.0 MB/s 00:46:03.915 16:23:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:03.915 16:23:08 -- common/autotest_common.sh@874 -- # size=4096 00:46:03.915 16:23:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:03.915 16:23:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:46:03.915 16:23:08 -- common/autotest_common.sh@877 -- # return 0 00:46:03.915 16:23:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:03.915 16:23:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:03.915 16:23:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:46:04.173 /dev/nbd1 00:46:04.173 16:23:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:46:04.173 16:23:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:46:04.173 16:23:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:46:04.173 16:23:08 -- common/autotest_common.sh@857 -- # local i 00:46:04.173 16:23:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:46:04.173 16:23:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:46:04.173 16:23:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:46:04.173 16:23:08 -- common/autotest_common.sh@861 -- # break 00:46:04.173 16:23:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:46:04.173 16:23:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:46:04.173 16:23:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:46:04.173 1+0 records in 00:46:04.173 1+0 records out 00:46:04.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320923 s, 12.8 MB/s 00:46:04.173 16:23:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:04.173 16:23:08 -- common/autotest_common.sh@874 -- # size=4096 00:46:04.173 16:23:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:46:04.173 16:23:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:46:04.173 16:23:08 -- 
common/autotest_common.sh@877 -- # return 0 00:46:04.173 16:23:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:46:04.173 16:23:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:46:04.173 16:23:08 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:46:04.431 16:23:08 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:46:04.432 16:23:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:46:04.432 16:23:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:46:04.432 16:23:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:46:04.432 16:23:08 -- bdev/nbd_common.sh@51 -- # local i 00:46:04.432 16:23:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:04.432 16:23:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@41 -- # break 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@45 -- # return 0 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:46:04.690 16:23:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@41 -- # break 00:46:04.981 16:23:09 -- bdev/nbd_common.sh@45 -- # return 0 00:46:04.981 16:23:09 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:46:04.981 16:23:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:46:04.981 16:23:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:46:04.981 16:23:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:46:05.262 16:23:09 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:46:05.520 [2024-07-22 16:23:09.554187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:46:05.520 [2024-07-22 16:23:09.554519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:05.520 [2024-07-22 16:23:09.554578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:46:05.520 [2024-07-22 16:23:09.554597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:05.520 [2024-07-22 16:23:09.557832] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:05.520 [2024-07-22 16:23:09.557878] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:46:05.520 [2024-07-22 
16:23:09.558157] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:46:05.520 BaseBdev1 00:46:05.520 [2024-07-22 16:23:09.558348] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:46:05.520 16:23:09 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:46:05.520 16:23:09 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:46:05.520 16:23:09 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:46:05.778 16:23:09 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:46:06.036 [2024-07-22 16:23:10.102775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:46:06.036 [2024-07-22 16:23:10.103134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:06.036 [2024-07-22 16:23:10.103207] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:46:06.036 [2024-07-22 16:23:10.103227] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:06.036 [2024-07-22 16:23:10.103824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:06.036 [2024-07-22 16:23:10.103855] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:46:06.036 [2024-07-22 16:23:10.103979] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:46:06.036 [2024-07-22 16:23:10.104018] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:46:06.036 [2024-07-22 16:23:10.104047] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:46:06.036 [2024-07-22 16:23:10.104077] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:46:06.036 [2024-07-22 16:23:10.104175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:46:06.036 BaseBdev2 00:46:06.036 16:23:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:46:06.036 16:23:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:46:06.036 16:23:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:46:06.294 16:23:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:46:06.553 [2024-07-22 16:23:10.614969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:46:06.553 [2024-07-22 16:23:10.615148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:06.553 [2024-07-22 16:23:10.615190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:46:06.553 [2024-07-22 16:23:10.615225] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:06.553 [2024-07-22 16:23:10.615811] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:06.553 [2024-07-22 16:23:10.615858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:46:06.553 [2024-07-22 16:23:10.615973] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev3 00:46:06.553 [2024-07-22 16:23:10.616045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:46:06.553 BaseBdev3 00:46:06.553 16:23:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:46:06.553 16:23:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:46:06.553 16:23:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:46:06.811 16:23:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:46:07.070 [2024-07-22 16:23:11.095202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:46:07.070 [2024-07-22 16:23:11.095569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:07.070 [2024-07-22 16:23:11.095728] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:46:07.070 [2024-07-22 16:23:11.095865] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:07.070 [2024-07-22 16:23:11.096630] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:07.070 [2024-07-22 16:23:11.096791] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:46:07.070 [2024-07-22 16:23:11.097040] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:46:07.070 [2024-07-22 16:23:11.097195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:46:07.070 BaseBdev4 00:46:07.070 16:23:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:46:07.329 [2024-07-22 16:23:11.563421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:46:07.329 [2024-07-22 16:23:11.563563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:46:07.329 [2024-07-22 16:23:11.563619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:46:07.329 [2024-07-22 16:23:11.563639] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:46:07.329 [2024-07-22 16:23:11.564388] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:46:07.329 [2024-07-22 16:23:11.564428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:46:07.329 [2024-07-22 16:23:11.564556] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:46:07.329 [2024-07-22 16:23:11.564607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:46:07.329 spare 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:46:07.329 16:23:11 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:07.329 16:23:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:07.603 [2024-07-22 16:23:11.664781] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:46:07.603 [2024-07-22 16:23:11.664866] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:46:07.603 [2024-07-22 16:23:11.665113] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048a80 00:46:07.603 [2024-07-22 16:23:11.672222] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:46:07.603 [2024-07-22 16:23:11.672249] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:46:07.603 [2024-07-22 16:23:11.672522] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:46:07.603 16:23:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:46:07.603 "name": "raid_bdev1", 00:46:07.603 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:46:07.603 "strip_size_kb": 64, 00:46:07.603 "state": "online", 00:46:07.603 "raid_level": "raid5f", 00:46:07.603 "superblock": true, 00:46:07.603 "num_base_bdevs": 4, 00:46:07.603 "num_base_bdevs_discovered": 4, 00:46:07.603 "num_base_bdevs_operational": 4, 00:46:07.603 "base_bdevs_list": [ 00:46:07.603 { 00:46:07.603 "name": "spare", 00:46:07.603 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:46:07.603 "is_configured": true, 00:46:07.603 "data_offset": 2048, 00:46:07.603 "data_size": 63488 00:46:07.603 }, 00:46:07.603 { 00:46:07.603 "name": "BaseBdev2", 00:46:07.603 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:46:07.603 "is_configured": true, 00:46:07.603 "data_offset": 2048, 00:46:07.603 "data_size": 63488 00:46:07.603 }, 00:46:07.603 { 00:46:07.603 "name": "BaseBdev3", 00:46:07.603 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:46:07.603 "is_configured": true, 00:46:07.603 "data_offset": 2048, 00:46:07.603 "data_size": 63488 00:46:07.603 }, 00:46:07.603 { 00:46:07.603 "name": "BaseBdev4", 00:46:07.603 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:46:07.603 "is_configured": true, 00:46:07.603 "data_offset": 2048, 00:46:07.603 "data_size": 63488 00:46:07.603 } 00:46:07.603 ] 00:46:07.603 }' 00:46:07.603 16:23:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:46:07.603 16:23:11 -- common/autotest_common.sh@10 -- # set +x 00:46:08.170 16:23:12 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:46:08.170 16:23:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:46:08.170 16:23:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:46:08.170 16:23:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:46:08.170 16:23:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:46:08.170 16:23:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:46:08.170 16:23:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:08.429 16:23:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:46:08.429 "name": 
"raid_bdev1", 00:46:08.429 "uuid": "5f75d2fc-1227-4803-a7e9-9f2b4bf0dc3f", 00:46:08.429 "strip_size_kb": 64, 00:46:08.429 "state": "online", 00:46:08.429 "raid_level": "raid5f", 00:46:08.429 "superblock": true, 00:46:08.429 "num_base_bdevs": 4, 00:46:08.429 "num_base_bdevs_discovered": 4, 00:46:08.429 "num_base_bdevs_operational": 4, 00:46:08.429 "base_bdevs_list": [ 00:46:08.429 { 00:46:08.429 "name": "spare", 00:46:08.429 "uuid": "715c536b-2d9c-5a11-b848-79160f805b4b", 00:46:08.429 "is_configured": true, 00:46:08.429 "data_offset": 2048, 00:46:08.429 "data_size": 63488 00:46:08.429 }, 00:46:08.429 { 00:46:08.429 "name": "BaseBdev2", 00:46:08.429 "uuid": "09f7d294-afa4-5adb-8e9a-53e64adcbc73", 00:46:08.429 "is_configured": true, 00:46:08.429 "data_offset": 2048, 00:46:08.429 "data_size": 63488 00:46:08.429 }, 00:46:08.429 { 00:46:08.429 "name": "BaseBdev3", 00:46:08.429 "uuid": "74fb51dd-4114-53c5-9c21-4fb5d44a2caa", 00:46:08.429 "is_configured": true, 00:46:08.429 "data_offset": 2048, 00:46:08.429 "data_size": 63488 00:46:08.429 }, 00:46:08.429 { 00:46:08.429 "name": "BaseBdev4", 00:46:08.429 "uuid": "ebfbecd0-348b-5770-aca4-a7f695a2fbf2", 00:46:08.429 "is_configured": true, 00:46:08.429 "data_offset": 2048, 00:46:08.429 "data_size": 63488 00:46:08.429 } 00:46:08.429 ] 00:46:08.429 }' 00:46:08.429 16:23:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:46:08.429 16:23:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:46:08.429 16:23:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:46:08.429 16:23:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:46:08.429 16:23:12 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:46:08.429 16:23:12 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:46:08.687 16:23:12 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:46:08.687 16:23:12 -- bdev/bdev_raid.sh@709 -- # killprocess 88209 00:46:08.687 16:23:12 -- common/autotest_common.sh@926 -- # '[' -z 88209 ']' 00:46:08.687 16:23:12 -- common/autotest_common.sh@930 -- # kill -0 88209 00:46:08.687 16:23:12 -- common/autotest_common.sh@931 -- # uname 00:46:08.687 16:23:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:46:08.687 16:23:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88209 00:46:08.687 killing process with pid 88209 00:46:08.687 Received shutdown signal, test time was about 60.000000 seconds 00:46:08.687 00:46:08.687 Latency(us) 00:46:08.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:08.687 =================================================================================================================== 00:46:08.687 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:08.687 16:23:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:46:08.687 16:23:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:46:08.687 16:23:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88209' 00:46:08.687 16:23:12 -- common/autotest_common.sh@945 -- # kill 88209 00:46:08.687 [2024-07-22 16:23:12.821308] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:46:08.687 16:23:12 -- common/autotest_common.sh@950 -- # wait 88209 00:46:08.687 [2024-07-22 16:23:12.821450] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:46:08.687 [2024-07-22 16:23:12.821561] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:46:08.687 [2024-07-22 16:23:12.821597] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:46:09.253 [2024-07-22 16:23:13.278084] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:46:10.742 16:23:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:46:10.742 00:46:10.742 real 0m29.269s 00:46:10.742 user 0m42.273s 00:46:10.742 sys 0m4.033s 00:46:10.742 16:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:10.742 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:46:10.742 ************************************ 00:46:10.742 END TEST raid5f_rebuild_test_sb 00:46:10.742 ************************************ 00:46:10.742 16:23:14 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:46:10.742 ************************************ 00:46:10.742 END TEST bdev_raid 00:46:10.742 ************************************ 00:46:10.742 00:46:10.742 real 12m8.772s 00:46:10.742 user 18m38.959s 00:46:10.742 sys 2m1.587s 00:46:10.742 16:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:10.742 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:46:10.742 16:23:14 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:46:10.742 16:23:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:10.742 16:23:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:10.742 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:46:10.742 ************************************ 00:46:10.742 START TEST bdevperf_config 00:46:10.742 ************************************ 00:46:10.742 16:23:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:46:10.742 * Looking for test storage... 
00:46:10.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:46:10.742 16:23:14 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:46:10.742 16:23:14 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:46:10.742 16:23:14 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:46:10.742 16:23:14 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:10.742 16:23:14 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:10.742 16:23:14 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:46:10.742 16:23:14 -- bdevperf/common.sh@8 -- # local job_section=global 00:46:10.742 16:23:14 -- bdevperf/common.sh@9 -- # local rw=read 00:46:10.742 16:23:14 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:46:10.742 16:23:14 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:46:10.743 16:23:14 -- bdevperf/common.sh@13 -- # cat 00:46:10.743 00:46:10.743 16:23:14 -- bdevperf/common.sh@18 -- # job='[global]' 00:46:10.743 16:23:14 -- bdevperf/common.sh@19 -- # echo 00:46:10.743 16:23:14 -- bdevperf/common.sh@20 -- # cat 00:46:10.743 16:23:14 -- bdevperf/test_config.sh@18 -- # create_job job0 00:46:10.743 16:23:14 -- bdevperf/common.sh@8 -- # local job_section=job0 00:46:10.743 16:23:14 -- bdevperf/common.sh@9 -- # local rw= 00:46:10.743 16:23:14 -- bdevperf/common.sh@10 -- # local filename= 00:46:10.743 16:23:14 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:46:10.743 16:23:14 -- bdevperf/common.sh@18 -- # job='[job0]' 00:46:10.743 00:46:10.743 16:23:14 -- bdevperf/common.sh@19 -- # echo 00:46:10.743 16:23:14 -- bdevperf/common.sh@20 -- # cat 00:46:10.743 16:23:14 -- bdevperf/test_config.sh@19 -- # create_job job1 00:46:10.743 16:23:14 -- bdevperf/common.sh@8 -- # local job_section=job1 00:46:10.743 16:23:14 -- bdevperf/common.sh@9 -- # local rw= 00:46:10.743 16:23:14 -- bdevperf/common.sh@10 -- # local filename= 00:46:10.743 16:23:14 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:46:10.743 16:23:14 -- bdevperf/common.sh@18 -- # job='[job1]' 00:46:10.743 00:46:10.743 16:23:14 -- bdevperf/common.sh@19 -- # echo 00:46:10.743 16:23:14 -- bdevperf/common.sh@20 -- # cat 00:46:10.743 16:23:14 -- bdevperf/test_config.sh@20 -- # create_job job2 00:46:10.743 16:23:14 -- bdevperf/common.sh@8 -- # local job_section=job2 00:46:10.743 00:46:10.743 16:23:14 -- bdevperf/common.sh@9 -- # local rw= 00:46:10.743 16:23:14 -- bdevperf/common.sh@10 -- # local filename= 00:46:10.743 16:23:14 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:46:10.743 16:23:14 -- bdevperf/common.sh@18 -- # job='[job2]' 00:46:10.743 16:23:14 -- bdevperf/common.sh@19 -- # echo 00:46:10.743 16:23:14 -- bdevperf/common.sh@20 -- # cat 00:46:10.743 00:46:10.743 16:23:14 -- bdevperf/test_config.sh@21 -- # create_job job3 00:46:10.743 16:23:14 -- bdevperf/common.sh@8 -- # local job_section=job3 00:46:10.743 16:23:14 -- bdevperf/common.sh@9 -- # local rw= 00:46:10.743 16:23:14 -- bdevperf/common.sh@10 -- # local filename= 00:46:10.743 16:23:14 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:46:10.743 16:23:14 -- bdevperf/common.sh@18 -- # job='[job3]' 00:46:10.743 16:23:14 -- bdevperf/common.sh@19 -- # echo 00:46:10.743 16:23:14 -- bdevperf/common.sh@20 -- # cat 00:46:10.743 16:23:14 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:16.009 16:23:19 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-22 16:23:14.868685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:16.009 [2024-07-22 16:23:14.868947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88926 ] 00:46:16.009 Using job config with 4 jobs 00:46:16.009 [2024-07-22 16:23:15.058206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.009 [2024-07-22 16:23:15.392202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:16.009 cpumask for '\''job0'\'' is too big 00:46:16.009 cpumask for '\''job1'\'' is too big 00:46:16.009 cpumask for '\''job2'\'' is too big 00:46:16.009 cpumask for '\''job3'\'' is too big 00:46:16.009 Running I/O for 2 seconds... 00:46:16.009 00:46:16.009 Latency(us) 00:46:16.009 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:16.009 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.009 Malloc0 : 2.02 22989.20 22.45 0.00 0.00 11124.06 2159.71 18111.77 00:46:16.009 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.009 Malloc0 : 2.02 22968.72 22.43 0.00 0.00 11105.21 2085.24 16086.11 00:46:16.009 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.009 Malloc0 : 2.02 23011.87 22.47 0.00 0.00 11057.46 2055.45 14000.87 00:46:16.009 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.009 Malloc0 : 2.03 22991.46 22.45 0.00 0.00 11040.32 2040.55 13047.62 00:46:16.009 =================================================================================================================== 00:46:16.009 Total : 91961.25 89.81 0.00 0.00 11081.68 2040.55 18111.77' 00:46:16.009 16:23:19 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-22 16:23:14.868685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:16.009 [2024-07-22 16:23:14.868947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88926 ] 00:46:16.009 Using job config with 4 jobs 00:46:16.010 [2024-07-22 16:23:15.058206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.010 [2024-07-22 16:23:15.392202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:16.010 cpumask for '\''job0'\'' is too big 00:46:16.010 cpumask for '\''job1'\'' is too big 00:46:16.010 cpumask for '\''job2'\'' is too big 00:46:16.010 cpumask for '\''job3'\'' is too big 00:46:16.010 Running I/O for 2 seconds... 
00:46:16.010 00:46:16.010 Latency(us) 00:46:16.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.02 22989.20 22.45 0.00 0.00 11124.06 2159.71 18111.77 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.02 22968.72 22.43 0.00 0.00 11105.21 2085.24 16086.11 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.02 23011.87 22.47 0.00 0.00 11057.46 2055.45 14000.87 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.03 22991.46 22.45 0.00 0.00 11040.32 2040.55 13047.62 00:46:16.010 =================================================================================================================== 00:46:16.010 Total : 91961.25 89.81 0.00 0.00 11081.68 2040.55 18111.77' 00:46:16.010 16:23:19 -- bdevperf/common.sh@32 -- # echo '[2024-07-22 16:23:14.868685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:16.010 [2024-07-22 16:23:14.868947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88926 ] 00:46:16.010 Using job config with 4 jobs 00:46:16.010 [2024-07-22 16:23:15.058206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.010 [2024-07-22 16:23:15.392202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:16.010 cpumask for '\''job0'\'' is too big 00:46:16.010 cpumask for '\''job1'\'' is too big 00:46:16.010 cpumask for '\''job2'\'' is too big 00:46:16.010 cpumask for '\''job3'\'' is too big 00:46:16.010 Running I/O for 2 seconds... 00:46:16.010 00:46:16.010 Latency(us) 00:46:16.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.02 22989.20 22.45 0.00 0.00 11124.06 2159.71 18111.77 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.02 22968.72 22.43 0.00 0.00 11105.21 2085.24 16086.11 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.02 23011.87 22.47 0.00 0.00 11057.46 2055.45 14000.87 00:46:16.010 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:16.010 Malloc0 : 2.03 22991.46 22.45 0.00 0.00 11040.32 2040.55 13047.62 00:46:16.010 =================================================================================================================== 00:46:16.010 Total : 91961.25 89.81 0.00 0.00 11081.68 2040.55 18111.77' 00:46:16.010 16:23:19 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:46:16.010 16:23:19 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:46:16.010 16:23:19 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:46:16.010 16:23:19 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:16.010 [2024-07-22 16:23:19.636133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:46:16.010 [2024-07-22 16:23:19.636620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88979 ] 00:46:16.010 [2024-07-22 16:23:19.817136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.010 [2024-07-22 16:23:20.139306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:16.602 cpumask for 'job0' is too big 00:46:16.602 cpumask for 'job1' is too big 00:46:16.602 cpumask for 'job2' is too big 00:46:16.602 cpumask for 'job3' is too big 00:46:20.790 16:23:24 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:46:20.790 Running I/O for 2 seconds... 00:46:20.790 00:46:20.790 Latency(us) 00:46:20.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:20.790 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:20.790 Malloc0 : 2.02 21928.67 21.41 0.00 0.00 11661.32 2249.08 18230.92 00:46:20.790 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:20.790 Malloc0 : 2.02 21908.78 21.40 0.00 0.00 11641.65 2189.50 16086.11 00:46:20.790 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:20.790 Malloc0 : 2.02 21889.56 21.38 0.00 0.00 11622.06 2219.29 16443.58 00:46:20.790 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:46:20.790 Malloc0 : 2.02 21872.49 21.36 0.00 0.00 11602.33 2144.81 16324.42 00:46:20.790 =================================================================================================================== 00:46:20.790 Total : 87599.49 85.55 0.00 0.00 11631.84 2144.81 18230.92' 00:46:20.790 16:23:24 -- bdevperf/test_config.sh@27 -- # cleanup 00:46:20.790 16:23:24 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:20.790 16:23:24 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:46:20.790 16:23:24 -- bdevperf/common.sh@8 -- # local job_section=job0 00:46:20.790 16:23:24 -- bdevperf/common.sh@9 -- # local rw=write 00:46:20.790 00:46:20.790 16:23:24 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:46:20.790 16:23:24 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:46:20.790 16:23:24 -- bdevperf/common.sh@18 -- # job='[job0]' 00:46:20.790 16:23:24 -- bdevperf/common.sh@19 -- # echo 00:46:20.790 16:23:24 -- bdevperf/common.sh@20 -- # cat 00:46:20.790 16:23:24 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:46:20.790 16:23:24 -- bdevperf/common.sh@8 -- # local job_section=job1 00:46:20.790 16:23:24 -- bdevperf/common.sh@9 -- # local rw=write 00:46:20.790 16:23:24 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:46:20.790 16:23:24 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:46:20.790 00:46:20.790 16:23:24 -- bdevperf/common.sh@18 -- # job='[job1]' 00:46:20.790 16:23:24 -- bdevperf/common.sh@19 -- # echo 00:46:20.790 16:23:24 -- bdevperf/common.sh@20 -- # cat 00:46:20.790 00:46:20.790 16:23:24 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:46:20.790 16:23:24 -- bdevperf/common.sh@8 -- # local job_section=job2 00:46:20.790 16:23:24 -- bdevperf/common.sh@9 -- # local rw=write 00:46:20.790 16:23:24 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:46:20.790 16:23:24 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:46:20.790 16:23:24 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:46:20.790 16:23:24 -- bdevperf/common.sh@19 -- # echo 00:46:20.790 16:23:24 -- bdevperf/common.sh@20 -- # cat 00:46:20.790 16:23:24 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:24.994 16:23:28 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-22 16:23:24.374935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:24.994 [2024-07-22 16:23:24.375136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89037 ] 00:46:24.994 Using job config with 3 jobs 00:46:24.994 [2024-07-22 16:23:24.552629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:24.994 [2024-07-22 16:23:24.834430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:24.994 cpumask for '\''job0'\'' is too big 00:46:24.994 cpumask for '\''job1'\'' is too big 00:46:24.994 cpumask for '\''job2'\'' is too big 00:46:24.994 Running I/O for 2 seconds... 00:46:24.994 00:46:24.994 Latency(us) 00:46:24.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:24.994 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.994 Malloc0 : 2.01 32596.41 31.83 0.00 0.00 7844.36 2204.39 12332.68 00:46:24.994 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.994 Malloc0 : 2.02 32612.40 31.85 0.00 0.00 7822.01 2025.66 10366.60 00:46:24.994 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.994 Malloc0 : 2.02 32585.02 31.82 0.00 0.00 7810.84 2040.55 9711.24 00:46:24.994 =================================================================================================================== 00:46:24.994 Total : 97793.84 95.50 0.00 0.00 7825.71 2025.66 12332.68' 00:46:24.994 16:23:28 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-22 16:23:24.374935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:24.994 [2024-07-22 16:23:24.375136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89037 ] 00:46:24.994 Using job config with 3 jobs 00:46:24.994 [2024-07-22 16:23:24.552629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:24.994 [2024-07-22 16:23:24.834430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:24.994 cpumask for '\''job0'\'' is too big 00:46:24.994 cpumask for '\''job1'\'' is too big 00:46:24.994 cpumask for '\''job2'\'' is too big 00:46:24.994 Running I/O for 2 seconds... 
00:46:24.994 00:46:24.994 Latency(us) 00:46:24.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:24.994 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.994 Malloc0 : 2.01 32596.41 31.83 0.00 0.00 7844.36 2204.39 12332.68 00:46:24.994 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.994 Malloc0 : 2.02 32612.40 31.85 0.00 0.00 7822.01 2025.66 10366.60 00:46:24.994 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.994 Malloc0 : 2.02 32585.02 31.82 0.00 0.00 7810.84 2040.55 9711.24 00:46:24.994 =================================================================================================================== 00:46:24.995 Total : 97793.84 95.50 0.00 0.00 7825.71 2025.66 12332.68' 00:46:24.995 16:23:28 -- bdevperf/common.sh@32 -- # echo '[2024-07-22 16:23:24.374935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:24.995 [2024-07-22 16:23:24.375136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89037 ] 00:46:24.995 Using job config with 3 jobs 00:46:24.995 [2024-07-22 16:23:24.552629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:24.995 [2024-07-22 16:23:24.834430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:24.995 cpumask for '\''job0'\'' is too big 00:46:24.995 cpumask for '\''job1'\'' is too big 00:46:24.995 cpumask for '\''job2'\'' is too big 00:46:24.995 Running I/O for 2 seconds... 00:46:24.995 00:46:24.995 Latency(us) 00:46:24.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:24.995 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.995 Malloc0 : 2.01 32596.41 31.83 0.00 0.00 7844.36 2204.39 12332.68 00:46:24.995 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.995 Malloc0 : 2.02 32612.40 31.85 0.00 0.00 7822.01 2025.66 10366.60 00:46:24.995 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:46:24.995 Malloc0 : 2.02 32585.02 31.82 0.00 0.00 7810.84 2040.55 9711.24 00:46:24.995 =================================================================================================================== 00:46:24.995 Total : 97793.84 95.50 0.00 0.00 7825.71 2025.66 12332.68' 00:46:24.995 16:23:28 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:46:24.995 16:23:28 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@35 -- # cleanup 00:46:24.995 16:23:28 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:46:24.995 16:23:28 -- bdevperf/common.sh@8 -- # local job_section=global 00:46:24.995 16:23:28 -- bdevperf/common.sh@9 -- # local rw=rw 00:46:24.995 16:23:28 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:46:24.995 16:23:28 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:46:24.995 16:23:28 -- bdevperf/common.sh@13 -- # cat 00:46:24.995 00:46:24.995 16:23:28 -- bdevperf/common.sh@18 -- # job='[global]' 00:46:24.995 16:23:28 -- bdevperf/common.sh@19 -- # echo 00:46:24.995 
16:23:28 -- bdevperf/common.sh@20 -- # cat 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@38 -- # create_job job0 00:46:24.995 16:23:28 -- bdevperf/common.sh@8 -- # local job_section=job0 00:46:24.995 16:23:28 -- bdevperf/common.sh@9 -- # local rw= 00:46:24.995 16:23:28 -- bdevperf/common.sh@10 -- # local filename= 00:46:24.995 16:23:28 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:46:24.995 16:23:28 -- bdevperf/common.sh@18 -- # job='[job0]' 00:46:24.995 00:46:24.995 16:23:28 -- bdevperf/common.sh@19 -- # echo 00:46:24.995 16:23:28 -- bdevperf/common.sh@20 -- # cat 00:46:24.995 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@39 -- # create_job job1 00:46:24.995 16:23:28 -- bdevperf/common.sh@8 -- # local job_section=job1 00:46:24.995 16:23:28 -- bdevperf/common.sh@9 -- # local rw= 00:46:24.995 16:23:28 -- bdevperf/common.sh@10 -- # local filename= 00:46:24.995 16:23:28 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:46:24.995 16:23:28 -- bdevperf/common.sh@18 -- # job='[job1]' 00:46:24.995 16:23:28 -- bdevperf/common.sh@19 -- # echo 00:46:24.995 16:23:28 -- bdevperf/common.sh@20 -- # cat 00:46:24.995 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@40 -- # create_job job2 00:46:24.995 16:23:28 -- bdevperf/common.sh@8 -- # local job_section=job2 00:46:24.995 16:23:28 -- bdevperf/common.sh@9 -- # local rw= 00:46:24.995 16:23:28 -- bdevperf/common.sh@10 -- # local filename= 00:46:24.995 16:23:28 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:46:24.995 16:23:28 -- bdevperf/common.sh@18 -- # job='[job2]' 00:46:24.995 16:23:28 -- bdevperf/common.sh@19 -- # echo 00:46:24.995 16:23:28 -- bdevperf/common.sh@20 -- # cat 00:46:24.995 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@41 -- # create_job job3 00:46:24.995 16:23:28 -- bdevperf/common.sh@8 -- # local job_section=job3 00:46:24.995 16:23:28 -- bdevperf/common.sh@9 -- # local rw= 00:46:24.995 16:23:28 -- bdevperf/common.sh@10 -- # local filename= 00:46:24.995 16:23:28 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:46:24.995 16:23:28 -- bdevperf/common.sh@18 -- # job='[job3]' 00:46:24.995 16:23:28 -- bdevperf/common.sh@19 -- # echo 00:46:24.995 16:23:28 -- bdevperf/common.sh@20 -- # cat 00:46:24.995 16:23:28 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:30.318 16:23:33 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-22 16:23:29.034071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:30.318 [2024-07-22 16:23:29.034281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89095 ] 00:46:30.318 Using job config with 4 jobs 00:46:30.318 [2024-07-22 16:23:29.212626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:30.318 [2024-07-22 16:23:29.517466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:30.318 cpumask for '\''job0'\'' is too big 00:46:30.318 cpumask for '\''job1'\'' is too big 00:46:30.318 cpumask for '\''job2'\'' is too big 00:46:30.318 cpumask for '\''job3'\'' is too big 00:46:30.318 Running I/O for 2 seconds... 
00:46:30.318 00:46:30.318 Latency(us) 00:46:30.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:30.318 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc0 : 2.03 11360.40 11.09 0.00 0.00 22513.44 4319.42 36461.85 00:46:30.318 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc1 : 2.03 11348.57 11.08 0.00 0.00 22513.71 4974.78 36938.47 00:46:30.318 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc0 : 2.05 11355.62 11.09 0.00 0.00 22415.52 4230.05 33125.47 00:46:30.318 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc1 : 2.05 11345.27 11.08 0.00 0.00 22412.84 5004.57 33363.78 00:46:30.318 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc0 : 2.06 11334.75 11.07 0.00 0.00 22349.47 4676.89 28716.68 00:46:30.318 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc1 : 2.06 11323.98 11.06 0.00 0.00 22342.82 5421.61 28835.84 00:46:30.318 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc0 : 2.06 11312.96 11.05 0.00 0.00 22275.77 4498.15 26452.71 00:46:30.318 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc1 : 2.06 11302.84 11.04 0.00 0.00 22272.40 5093.93 26571.87 00:46:30.318 =================================================================================================================== 00:46:30.318 Total : 90684.40 88.56 0.00 0.00 22386.65 4230.05 36938.47' 00:46:30.318 16:23:33 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-22 16:23:29.034071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:30.318 [2024-07-22 16:23:29.034281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89095 ] 00:46:30.318 Using job config with 4 jobs 00:46:30.318 [2024-07-22 16:23:29.212626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:30.318 [2024-07-22 16:23:29.517466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:30.318 cpumask for '\''job0'\'' is too big 00:46:30.318 cpumask for '\''job1'\'' is too big 00:46:30.318 cpumask for '\''job2'\'' is too big 00:46:30.318 cpumask for '\''job3'\'' is too big 00:46:30.318 Running I/O for 2 seconds... 
00:46:30.318 00:46:30.318 Latency(us) 00:46:30.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:30.318 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc0 : 2.03 11360.40 11.09 0.00 0.00 22513.44 4319.42 36461.85 00:46:30.318 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc1 : 2.03 11348.57 11.08 0.00 0.00 22513.71 4974.78 36938.47 00:46:30.318 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc0 : 2.05 11355.62 11.09 0.00 0.00 22415.52 4230.05 33125.47 00:46:30.318 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc1 : 2.05 11345.27 11.08 0.00 0.00 22412.84 5004.57 33363.78 00:46:30.318 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc0 : 2.06 11334.75 11.07 0.00 0.00 22349.47 4676.89 28716.68 00:46:30.318 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.318 Malloc1 : 2.06 11323.98 11.06 0.00 0.00 22342.82 5421.61 28835.84 00:46:30.319 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc0 : 2.06 11312.96 11.05 0.00 0.00 22275.77 4498.15 26452.71 00:46:30.319 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc1 : 2.06 11302.84 11.04 0.00 0.00 22272.40 5093.93 26571.87 00:46:30.319 =================================================================================================================== 00:46:30.319 Total : 90684.40 88.56 0.00 0.00 22386.65 4230.05 36938.47' 00:46:30.319 16:23:33 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:46:30.319 16:23:33 -- bdevperf/common.sh@32 -- # echo '[2024-07-22 16:23:29.034071] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:30.319 [2024-07-22 16:23:29.034281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89095 ] 00:46:30.319 Using job config with 4 jobs 00:46:30.319 [2024-07-22 16:23:29.212626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:30.319 [2024-07-22 16:23:29.517466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:30.319 cpumask for '\''job0'\'' is too big 00:46:30.319 cpumask for '\''job1'\'' is too big 00:46:30.319 cpumask for '\''job2'\'' is too big 00:46:30.319 cpumask for '\''job3'\'' is too big 00:46:30.319 Running I/O for 2 seconds... 
00:46:30.319 00:46:30.319 Latency(us) 00:46:30.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:30.319 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc0 : 2.03 11360.40 11.09 0.00 0.00 22513.44 4319.42 36461.85 00:46:30.319 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc1 : 2.03 11348.57 11.08 0.00 0.00 22513.71 4974.78 36938.47 00:46:30.319 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc0 : 2.05 11355.62 11.09 0.00 0.00 22415.52 4230.05 33125.47 00:46:30.319 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc1 : 2.05 11345.27 11.08 0.00 0.00 22412.84 5004.57 33363.78 00:46:30.319 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc0 : 2.06 11334.75 11.07 0.00 0.00 22349.47 4676.89 28716.68 00:46:30.319 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc1 : 2.06 11323.98 11.06 0.00 0.00 22342.82 5421.61 28835.84 00:46:30.319 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc0 : 2.06 11312.96 11.05 0.00 0.00 22275.77 4498.15 26452.71 00:46:30.319 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:46:30.319 Malloc1 : 2.06 11302.84 11.04 0.00 0.00 22272.40 5093.93 26571.87 00:46:30.319 =================================================================================================================== 00:46:30.319 Total : 90684.40 88.56 0.00 0.00 22386.65 4230.05 36938.47' 00:46:30.319 16:23:33 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:46:30.319 16:23:33 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:46:30.319 16:23:33 -- bdevperf/test_config.sh@44 -- # cleanup 00:46:30.319 16:23:33 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:46:30.319 16:23:33 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:46:30.319 ************************************ 00:46:30.319 END TEST bdevperf_config 00:46:30.319 ************************************ 00:46:30.319 00:46:30.319 real 0m19.006s 00:46:30.319 user 0m16.792s 00:46:30.319 sys 0m1.730s 00:46:30.319 16:23:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:30.319 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:46:30.319 16:23:33 -- spdk/autotest.sh@198 -- # uname -s 00:46:30.319 16:23:33 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:46:30.319 16:23:33 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:46:30.319 16:23:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:30.319 16:23:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:30.319 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:46:30.319 ************************************ 00:46:30.319 START TEST reactor_set_interrupt 00:46:30.319 ************************************ 00:46:30.319 16:23:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:46:30.319 * Looking for test storage... 
00:46:30.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.319 16:23:33 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:46:30.319 16:23:33 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:46:30.319 16:23:33 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.319 16:23:33 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.319 16:23:33 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:46:30.319 16:23:33 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:30.319 16:23:33 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:46:30.319 16:23:33 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:46:30.319 16:23:33 -- common/autotest_common.sh@34 -- # set -e 00:46:30.319 16:23:33 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:46:30.319 16:23:33 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:46:30.319 16:23:33 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:46:30.319 16:23:33 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:46:30.319 16:23:33 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:46:30.319 16:23:33 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:46:30.319 16:23:33 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:46:30.319 16:23:33 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:46:30.319 16:23:33 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:46:30.319 16:23:33 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:46:30.319 16:23:33 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:46:30.319 16:23:33 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:46:30.319 16:23:33 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:46:30.319 16:23:33 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:46:30.319 16:23:33 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:46:30.319 16:23:33 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:46:30.319 16:23:33 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:46:30.319 16:23:33 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:46:30.319 16:23:33 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:46:30.319 16:23:33 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:46:30.319 16:23:33 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:46:30.319 16:23:33 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:46:30.319 16:23:33 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:46:30.319 16:23:33 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:46:30.319 16:23:33 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:46:30.319 16:23:33 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:46:30.319 16:23:33 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:46:30.319 16:23:33 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:46:30.319 16:23:33 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:46:30.319 16:23:33 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:46:30.319 16:23:33 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
00:46:30.319 16:23:33 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:46:30.319 16:23:33 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:46:30.319 16:23:33 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:46:30.319 16:23:33 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:46:30.319 16:23:33 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:46:30.319 16:23:33 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:46:30.319 16:23:33 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:46:30.319 16:23:33 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:46:30.319 16:23:33 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:46:30.319 16:23:33 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:46:30.319 16:23:33 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:46:30.319 16:23:33 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:46:30.319 16:23:33 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:46:30.319 16:23:33 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:46:30.319 16:23:33 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:46:30.319 16:23:33 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:46:30.319 16:23:33 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:46:30.319 16:23:33 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:46:30.319 16:23:33 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:46:30.319 16:23:33 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:46:30.319 16:23:33 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:46:30.319 16:23:33 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:46:30.319 16:23:33 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:46:30.319 16:23:33 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:46:30.319 16:23:33 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:46:30.319 16:23:33 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:46:30.319 16:23:33 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:46:30.319 16:23:33 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:46:30.319 16:23:33 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:46:30.319 16:23:33 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:46:30.319 16:23:33 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:46:30.319 16:23:33 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:46:30.319 16:23:33 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:46:30.320 16:23:33 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:46:30.320 16:23:33 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:46:30.320 16:23:33 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:46:30.320 16:23:33 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:46:30.320 16:23:33 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:46:30.320 16:23:33 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:46:30.320 16:23:33 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:46:30.320 16:23:33 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:46:30.320 16:23:33 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:46:30.320 16:23:33 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:46:30.320 16:23:33 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:46:30.320 16:23:33 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:46:30.320 16:23:33 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:46:30.320 16:23:33 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:46:30.320 16:23:33 -- 
common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:46:30.320 16:23:33 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:46:30.320 16:23:33 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:46:30.320 16:23:33 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:46:30.320 16:23:33 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:46:30.320 16:23:33 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:46:30.320 16:23:33 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:46:30.320 16:23:33 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:46:30.320 16:23:33 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:46:30.320 16:23:33 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:46:30.320 16:23:33 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:46:30.320 16:23:33 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:46:30.320 16:23:33 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:46:30.320 16:23:33 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:46:30.320 16:23:33 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:46:30.320 16:23:33 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:46:30.320 16:23:33 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:46:30.320 16:23:33 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:46:30.320 16:23:33 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:46:30.320 16:23:33 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:46:30.320 16:23:33 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:46:30.320 #define SPDK_CONFIG_H 00:46:30.320 #define SPDK_CONFIG_APPS 1 00:46:30.320 #define SPDK_CONFIG_ARCH native 00:46:30.320 #define SPDK_CONFIG_ASAN 1 00:46:30.320 #undef SPDK_CONFIG_AVAHI 00:46:30.320 #undef SPDK_CONFIG_CET 00:46:30.320 #define SPDK_CONFIG_COVERAGE 1 00:46:30.320 #define SPDK_CONFIG_CROSS_PREFIX 00:46:30.320 #undef SPDK_CONFIG_CRYPTO 00:46:30.320 #undef SPDK_CONFIG_CRYPTO_MLX5 00:46:30.320 #undef SPDK_CONFIG_CUSTOMOCF 00:46:30.320 #undef SPDK_CONFIG_DAOS 00:46:30.320 #define SPDK_CONFIG_DAOS_DIR 00:46:30.320 #define SPDK_CONFIG_DEBUG 1 00:46:30.320 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:46:30.320 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:46:30.320 #define SPDK_CONFIG_DPDK_INC_DIR 00:46:30.320 #define SPDK_CONFIG_DPDK_LIB_DIR 00:46:30.320 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:46:30.320 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:46:30.320 #define SPDK_CONFIG_EXAMPLES 1 00:46:30.320 #undef SPDK_CONFIG_FC 00:46:30.320 #define SPDK_CONFIG_FC_PATH 00:46:30.320 #define SPDK_CONFIG_FIO_PLUGIN 1 00:46:30.320 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:46:30.320 #undef SPDK_CONFIG_FUSE 00:46:30.320 #undef SPDK_CONFIG_FUZZER 00:46:30.320 #define SPDK_CONFIG_FUZZER_LIB 00:46:30.320 #undef SPDK_CONFIG_GOLANG 00:46:30.320 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:46:30.320 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:46:30.320 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:46:30.320 #undef SPDK_CONFIG_HAVE_LIBBSD 00:46:30.320 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:46:30.320 #define 
SPDK_CONFIG_IDXD 1 00:46:30.320 #define SPDK_CONFIG_IDXD_KERNEL 1 00:46:30.320 #undef SPDK_CONFIG_IPSEC_MB 00:46:30.320 #define SPDK_CONFIG_IPSEC_MB_DIR 00:46:30.320 #define SPDK_CONFIG_ISAL 1 00:46:30.320 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:46:30.320 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:46:30.320 #define SPDK_CONFIG_LIBDIR 00:46:30.320 #undef SPDK_CONFIG_LTO 00:46:30.320 #define SPDK_CONFIG_MAX_LCORES 00:46:30.320 #define SPDK_CONFIG_NVME_CUSE 1 00:46:30.320 #undef SPDK_CONFIG_OCF 00:46:30.320 #define SPDK_CONFIG_OCF_PATH 00:46:30.320 #define SPDK_CONFIG_OPENSSL_PATH 00:46:30.320 #undef SPDK_CONFIG_PGO_CAPTURE 00:46:30.320 #undef SPDK_CONFIG_PGO_USE 00:46:30.320 #define SPDK_CONFIG_PREFIX /usr/local 00:46:30.320 #define SPDK_CONFIG_RAID5F 1 00:46:30.320 #undef SPDK_CONFIG_RBD 00:46:30.320 #define SPDK_CONFIG_RDMA 1 00:46:30.320 #define SPDK_CONFIG_RDMA_PROV verbs 00:46:30.320 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:46:30.320 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:46:30.320 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:46:30.320 #undef SPDK_CONFIG_SHARED 00:46:30.320 #undef SPDK_CONFIG_SMA 00:46:30.320 #define SPDK_CONFIG_TESTS 1 00:46:30.320 #undef SPDK_CONFIG_TSAN 00:46:30.320 #define SPDK_CONFIG_UBLK 1 00:46:30.320 #define SPDK_CONFIG_UBSAN 1 00:46:30.320 #define SPDK_CONFIG_UNIT_TESTS 1 00:46:30.320 #undef SPDK_CONFIG_URING 00:46:30.320 #define SPDK_CONFIG_URING_PATH 00:46:30.320 #undef SPDK_CONFIG_URING_ZNS 00:46:30.320 #undef SPDK_CONFIG_USDT 00:46:30.320 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:46:30.320 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:46:30.320 #undef SPDK_CONFIG_VFIO_USER 00:46:30.320 #define SPDK_CONFIG_VFIO_USER_DIR 00:46:30.320 #define SPDK_CONFIG_VHOST 1 00:46:30.320 #define SPDK_CONFIG_VIRTIO 1 00:46:30.320 #undef SPDK_CONFIG_VTUNE 00:46:30.320 #define SPDK_CONFIG_VTUNE_DIR 00:46:30.320 #define SPDK_CONFIG_WERROR 1 00:46:30.320 #define SPDK_CONFIG_WPDK_DIR 00:46:30.320 #undef SPDK_CONFIG_XNVME 00:46:30.320 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:46:30.320 16:23:33 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:46:30.320 16:23:33 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:30.320 16:23:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:30.320 16:23:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:30.320 16:23:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:30.320 16:23:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:30.320 16:23:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:30.320 
16:23:33 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:30.320 16:23:33 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:30.320 16:23:33 -- paths/export.sh@6 -- # export PATH 00:46:30.320 16:23:33 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:30.320 16:23:33 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:46:30.320 16:23:33 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:46:30.320 16:23:33 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:46:30.320 16:23:33 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:46:30.320 16:23:33 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:46:30.320 16:23:33 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:46:30.320 16:23:33 -- pm/common@16 -- # TEST_TAG=N/A 00:46:30.320 16:23:33 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:46:30.320 16:23:33 -- common/autotest_common.sh@52 -- # : 1 00:46:30.320 16:23:33 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:46:30.320 16:23:33 -- common/autotest_common.sh@56 -- # : 0 00:46:30.320 16:23:33 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:46:30.320 16:23:33 -- common/autotest_common.sh@58 -- # : 0 00:46:30.320 16:23:33 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:46:30.320 16:23:33 -- common/autotest_common.sh@60 -- # : 1 00:46:30.320 16:23:33 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:46:30.320 16:23:33 -- common/autotest_common.sh@62 -- # : 1 00:46:30.320 16:23:33 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:46:30.320 16:23:33 -- common/autotest_common.sh@64 -- # : 00:46:30.321 16:23:33 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:46:30.321 16:23:33 -- common/autotest_common.sh@66 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:46:30.321 
16:23:33 -- common/autotest_common.sh@68 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:46:30.321 16:23:33 -- common/autotest_common.sh@70 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:46:30.321 16:23:33 -- common/autotest_common.sh@72 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:46:30.321 16:23:33 -- common/autotest_common.sh@74 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:46:30.321 16:23:33 -- common/autotest_common.sh@76 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:46:30.321 16:23:33 -- common/autotest_common.sh@78 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:46:30.321 16:23:33 -- common/autotest_common.sh@80 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:46:30.321 16:23:33 -- common/autotest_common.sh@82 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:46:30.321 16:23:33 -- common/autotest_common.sh@84 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:46:30.321 16:23:33 -- common/autotest_common.sh@86 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:46:30.321 16:23:33 -- common/autotest_common.sh@88 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:46:30.321 16:23:33 -- common/autotest_common.sh@90 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:46:30.321 16:23:33 -- common/autotest_common.sh@92 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:46:30.321 16:23:33 -- common/autotest_common.sh@94 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:46:30.321 16:23:33 -- common/autotest_common.sh@96 -- # : rdma 00:46:30.321 16:23:33 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:46:30.321 16:23:33 -- common/autotest_common.sh@98 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:46:30.321 16:23:33 -- common/autotest_common.sh@100 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:46:30.321 16:23:33 -- common/autotest_common.sh@102 -- # : 1 00:46:30.321 16:23:33 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:46:30.321 16:23:33 -- common/autotest_common.sh@104 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:46:30.321 16:23:33 -- common/autotest_common.sh@106 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:46:30.321 16:23:33 -- common/autotest_common.sh@108 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:46:30.321 16:23:33 -- common/autotest_common.sh@110 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:46:30.321 16:23:33 -- common/autotest_common.sh@112 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:46:30.321 16:23:33 -- common/autotest_common.sh@114 -- # : 1 00:46:30.321 16:23:33 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 
00:46:30.321 16:23:33 -- common/autotest_common.sh@116 -- # : 1 00:46:30.321 16:23:33 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:46:30.321 16:23:33 -- common/autotest_common.sh@118 -- # : 00:46:30.321 16:23:33 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:46:30.321 16:23:33 -- common/autotest_common.sh@120 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:46:30.321 16:23:33 -- common/autotest_common.sh@122 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:46:30.321 16:23:33 -- common/autotest_common.sh@124 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:46:30.321 16:23:33 -- common/autotest_common.sh@126 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:46:30.321 16:23:33 -- common/autotest_common.sh@128 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:46:30.321 16:23:33 -- common/autotest_common.sh@130 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:46:30.321 16:23:33 -- common/autotest_common.sh@132 -- # : 00:46:30.321 16:23:33 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:46:30.321 16:23:33 -- common/autotest_common.sh@134 -- # : true 00:46:30.321 16:23:33 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:46:30.321 16:23:33 -- common/autotest_common.sh@136 -- # : 1 00:46:30.321 16:23:33 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:46:30.321 16:23:33 -- common/autotest_common.sh@138 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:46:30.321 16:23:33 -- common/autotest_common.sh@140 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:46:30.321 16:23:33 -- common/autotest_common.sh@142 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:46:30.321 16:23:33 -- common/autotest_common.sh@144 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:46:30.321 16:23:33 -- common/autotest_common.sh@146 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:46:30.321 16:23:33 -- common/autotest_common.sh@148 -- # : 00:46:30.321 16:23:33 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:46:30.321 16:23:33 -- common/autotest_common.sh@150 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:46:30.321 16:23:33 -- common/autotest_common.sh@152 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:46:30.321 16:23:33 -- common/autotest_common.sh@154 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:46:30.321 16:23:33 -- common/autotest_common.sh@156 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:46:30.321 16:23:33 -- common/autotest_common.sh@158 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:46:30.321 16:23:33 -- common/autotest_common.sh@160 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:46:30.321 16:23:33 -- common/autotest_common.sh@163 -- # : 00:46:30.321 16:23:33 -- common/autotest_common.sh@164 -- # export 
SPDK_TEST_FUZZER_TARGET 00:46:30.321 16:23:33 -- common/autotest_common.sh@165 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:46:30.321 16:23:33 -- common/autotest_common.sh@167 -- # : 0 00:46:30.321 16:23:33 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:46:30.321 16:23:33 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:30.321 16:23:33 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:46:30.321 16:23:33 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:46:30.321 16:23:33 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:46:30.321 16:23:33 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:46:30.321 16:23:33 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:46:30.321 16:23:33 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:46:30.321 16:23:33 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:46:30.321 16:23:33 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:46:30.321 16:23:33 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:46:30.321 16:23:33 -- 
common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:46:30.321 16:23:33 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:46:30.321 16:23:33 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:46:30.321 16:23:33 -- common/autotest_common.sh@196 -- # cat 00:46:30.321 16:23:33 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:46:30.321 16:23:33 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:46:30.321 16:23:33 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:46:30.321 16:23:33 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:46:30.321 16:23:33 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:46:30.321 16:23:33 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:46:30.321 16:23:33 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:46:30.322 16:23:33 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:46:30.322 16:23:33 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:46:30.322 16:23:33 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:46:30.322 16:23:33 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:46:30.322 16:23:33 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:46:30.322 16:23:33 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:46:30.322 16:23:33 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:46:30.322 16:23:33 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:46:30.322 16:23:33 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:46:30.322 16:23:33 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:46:30.322 16:23:33 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:46:30.322 16:23:33 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:46:30.322 16:23:33 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:46:30.322 16:23:33 -- common/autotest_common.sh@249 -- # export valgrind= 00:46:30.322 16:23:33 -- common/autotest_common.sh@249 -- # valgrind= 00:46:30.322 16:23:33 -- common/autotest_common.sh@255 -- # uname -s 00:46:30.322 16:23:33 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:46:30.322 16:23:33 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:46:30.322 16:23:33 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:46:30.322 16:23:33 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:46:30.322 16:23:33 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@265 -- # MAKE=make 00:46:30.322 16:23:33 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:46:30.322 16:23:33 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:46:30.322 16:23:33 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:46:30.322 16:23:33 -- common/autotest_common.sh@284 -- # '[' -z 
/home/vagrant/spdk_repo/spdk/../output ']' 00:46:30.322 16:23:33 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:46:30.322 16:23:33 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:46:30.322 16:23:33 -- common/autotest_common.sh@309 -- # [[ -z 89181 ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@309 -- # kill -0 89181 00:46:30.322 16:23:33 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:46:30.322 16:23:33 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:46:30.322 16:23:33 -- common/autotest_common.sh@322 -- # local mount target_dir 00:46:30.322 16:23:33 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:46:30.322 16:23:33 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:46:30.322 16:23:33 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:46:30.322 16:23:33 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:46:30.322 16:23:33 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.ivU7mN 00:46:30.322 16:23:33 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:46:30.322 16:23:33 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.ivU7mN/tests/interrupt /tmp/spdk.ivU7mN 00:46:30.322 16:23:33 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@318 -- # df -T 00:46:30.322 16:23:33 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249308672 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254023168 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=4714496 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=10286370816 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19681529856 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=9378381824 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=6268858368 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6270115840 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- 
common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda16 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=777306112 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=923156480 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=81207296 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=103000064 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=6395904 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254010880 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254023168 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt/output 00:46:30.322 16:23:33 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # avails["$mount"]=93535600640 00:46:30.322 16:23:33 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:46:30.322 16:23:33 -- common/autotest_common.sh@354 -- # uses["$mount"]=6167179264 00:46:30.322 16:23:33 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:30.322 16:23:33 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:46:30.322 * Looking for test storage... 
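The set_test_storage trace above walks the output of df -T into associative arrays keyed by mount point. A minimal sketch of that parsing loop, reconstructed from the traced read and assignments (the conversion from 1K blocks to bytes is an assumption; only the final byte counts are visible in the trace):

    # Sketch of the df parsing in autotest_common.sh set_test_storage(), as traced above.
    # Column order follows the traced read: source fs size use avail _ mount.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks; bytes assumed here
        uses["$mount"]=$((use * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)

The check that follows in the trace takes target_space=avails[mount] for the candidate directory and accepts it when uses[mount] plus requested_size stays under 95% of sizes[mount]; with the numbers in this run, 10286370816 covers the requested 2214592512 and the resulting 11592974336 is roughly 59% of 19681529856.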
00:46:30.322 16:23:33 -- common/autotest_common.sh@359 -- # local target_space new_size 00:46:30.322 16:23:33 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:46:30.322 16:23:33 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:46:30.322 16:23:33 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.322 16:23:33 -- common/autotest_common.sh@363 -- # mount=/ 00:46:30.322 16:23:33 -- common/autotest_common.sh@365 -- # target_space=10286370816 00:46:30.322 16:23:33 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:46:30.322 16:23:33 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:46:30.322 16:23:33 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@372 -- # new_size=11592974336 00:46:30.322 16:23:33 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:46:30.322 16:23:33 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.322 16:23:33 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.322 16:23:33 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:30.322 16:23:33 -- common/autotest_common.sh@380 -- # return 0 00:46:30.322 16:23:33 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:46:30.322 16:23:33 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:46:30.322 16:23:33 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:46:30.322 16:23:33 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:46:30.322 16:23:33 -- common/autotest_common.sh@1672 -- # true 00:46:30.322 16:23:33 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:46:30.322 16:23:33 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:46:30.322 16:23:33 -- common/autotest_common.sh@27 -- # exec 00:46:30.322 16:23:33 -- common/autotest_common.sh@29 -- # exec 00:46:30.322 16:23:33 -- common/autotest_common.sh@31 -- # xtrace_restore 00:46:30.322 16:23:33 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:46:30.322 16:23:33 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:46:30.322 16:23:33 -- common/autotest_common.sh@18 -- # set -x 00:46:30.322 16:23:33 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:30.322 16:23:33 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:46:30.322 16:23:33 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:46:30.322 16:23:33 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:46:30.323 16:23:33 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:46:30.323 16:23:33 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:46:30.323 16:23:33 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=89220 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 89220 /var/tmp/spdk.sock 00:46:30.323 16:23:33 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:46:30.323 16:23:33 -- common/autotest_common.sh@819 -- # '[' -z 89220 ']' 00:46:30.323 16:23:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:30.323 16:23:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:30.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:30.323 16:23:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:30.323 16:23:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:30.323 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:46:30.323 [2024-07-22 16:23:34.029802] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
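start_intr_tgt, traced at interrupt_common.sh lines 23 through 29 above, launches the interrupt_tgt example on a three-core mask and blocks until its RPC socket answers. A hedged reconstruction from those traced lines (the backgrounding and the parameter defaulting are inferred rather than shown verbatim):

    # Reconstruction of interrupt_common.sh start_intr_tgt() as exercised above.
    start_intr_tgt() {
        local rpc_addr=${1:-/var/tmp/spdk.sock}    # value seen in the trace
        local cpu_mask=${2:-0x07}
        "$rootdir"/build/examples/interrupt_tgt -m "$cpu_mask" -r "$rpc_addr" -E -g &
        intr_tgt_pid=$!                            # 89220 in this run
        trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
        waitforlisten "$intr_tgt_pid" "$rpc_addr"  # waits for the UNIX-domain RPC socket
    }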
00:46:30.323 [2024-07-22 16:23:34.030114] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89220 ] 00:46:30.323 [2024-07-22 16:23:34.212367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:30.323 [2024-07-22 16:23:34.515422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:30.323 [2024-07-22 16:23:34.515549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:30.323 [2024-07-22 16:23:34.515573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:30.581 [2024-07-22 16:23:34.842604] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:30.840 16:23:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:30.840 16:23:35 -- common/autotest_common.sh@852 -- # return 0 00:46:30.840 16:23:35 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:46:30.840 16:23:35 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:31.098 Malloc0 00:46:31.098 Malloc1 00:46:31.098 Malloc2 00:46:31.356 16:23:35 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:46:31.357 16:23:35 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:46:31.357 16:23:35 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:46:31.357 16:23:35 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:46:31.357 5000+0 records in 00:46:31.357 5000+0 records out 00:46:31.357 10240000 bytes (10 MB, 9.8 MiB) copied, 0.018698 s, 548 MB/s 00:46:31.357 16:23:35 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:46:31.615 AIO0 00:46:31.615 16:23:35 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 89220 00:46:31.615 16:23:35 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 89220 without_thd 00:46:31.615 16:23:35 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=89220 00:46:31.615 16:23:35 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:46:31.615 16:23:35 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:46:31.615 16:23:35 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:46:31.615 16:23:35 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:46:31.615 16:23:35 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:46:31.615 16:23:35 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:46:31.615 16:23:35 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:31.615 16:23:35 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:46:31.615 16:23:35 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:31.874 16:23:35 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:46:31.874 16:23:35 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:46:31.874 16:23:35 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 
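reactor_get_thread_ids, invoked above for masks 0x1 and 0x4, asks the running target which SPDK threads sit on a given reactor by filtering thread_get_stats with jq. A sketch consistent with the traced commands (the hex-to-decimal step is inferred from 0x1 becoming 1 and 0x4 becoming 4 in the trace):

    # Sketch of interrupt_common.sh reactor_get_thread_ids() as traced above.
    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
        reactor_cpumask=$((reactor_cpumask))   # 0x1 -> 1, 0x4 -> 4
        "$rootdir"/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
    }

In this run thd0_ids picks up only thread id 1 (the app_thread pinned to reactor 0), while the query for reactor 2 returns nothing, matching the empty echo that follows.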
00:46:31.874 16:23:35 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:46:31.874 16:23:35 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:46:31.874 16:23:35 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:46:31.874 16:23:35 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:31.874 16:23:35 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:46:31.874 16:23:35 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:46:32.132 16:23:36 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:46:32.132 spdk_thread ids are 1 on reactor0. 00:46:32.132 16:23:36 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:46:32.132 16:23:36 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:46:32.132 16:23:36 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 89220 0 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89220 0 idle 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@33 -- # local pid=89220 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89220 -w 256 00:46:32.132 16:23:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89220 root 20 0 20.1t 148864 29952 S 0.0 1.2 0:00.93 reactor_0' 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@48 -- # echo 89220 root 20 0 20.1t 148864 29952 S 0.0 1.2 0:00.93 reactor_0 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:32.391 16:23:36 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:46:32.391 16:23:36 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 89220 1 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89220 1 idle 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@33 -- # local pid=89220 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:32.391 16:23:36 -- 
interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89220 -w 256 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89227 root 20 0 20.1t 148864 29952 S 0.0 1.2 0:00.00 reactor_1' 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@48 -- # echo 89227 root 20 0 20.1t 148864 29952 S 0.0 1.2 0:00.00 reactor_1 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:32.391 16:23:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:32.649 16:23:36 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:46:32.649 16:23:36 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 89220 2 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89220 2 idle 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@33 -- # local pid=89220 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89220 -w 256 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89228 root 20 0 20.1t 148864 29952 S 0.0 1.2 0:00.00 reactor_2' 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@48 -- # echo 89228 root 20 0 20.1t 148864 29952 S 0.0 1.2 0:00.00 reactor_2 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:32.649 16:23:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:32.649 16:23:36 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:46:32.649 16:23:36 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:46:32.649 16:23:36 -- 
interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:46:32.906 [2024-07-22 16:23:37.163398] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:33.164 16:23:37 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:46:33.164 [2024-07-22 16:23:37.387191] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:46:33.164 [2024-07-22 16:23:37.388056] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:33.164 16:23:37 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:46:33.422 [2024-07-22 16:23:37.654935] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:46:33.422 [2024-07-22 16:23:37.655787] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:33.422 16:23:37 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:46:33.422 16:23:37 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 89220 0 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 89220 0 busy 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@33 -- # local pid=89220 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89220 -w 256 00:46:33.422 16:23:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89220 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:01.46 reactor_0' 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@48 -- # echo 89220 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:01.46 reactor_0 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:33.686 16:23:37 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:46:33.686 16:23:37 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 89220 2 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 89220 2 busy 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@33 -- # local pid=89220 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:46:33.686 16:23:37 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89220 -w 256 00:46:33.686 16:23:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89228 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:00.45 reactor_2' 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@48 -- # echo 89228 root 20 0 20.1t 152320 29952 R 99.9 1.2 0:00.45 reactor_2 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:46:33.963 16:23:38 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:33.963 16:23:38 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:46:34.219 [2024-07-22 16:23:38.382849] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:46:34.219 [2024-07-22 16:23:38.383344] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:34.219 16:23:38 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:46:34.219 16:23:38 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 89220 2 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89220 2 idle 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@33 -- # local pid=89220 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89220 -w 256 00:46:34.219 16:23:38 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89228 root 20 0 20.1t 152320 29952 S 0.0 1.2 0:00.71 reactor_2' 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@48 -- # echo 89228 root 20 0 20.1t 152320 29952 S 0.0 1.2 0:00.71 reactor_2 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@49 -- # 
cpu_rate=0 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:34.475 16:23:38 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:34.475 16:23:38 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:46:34.731 [2024-07-22 16:23:38.878929] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:46:34.731 [2024-07-22 16:23:38.880027] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:34.732 16:23:38 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:46:34.732 16:23:38 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:46:34.732 16:23:38 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:46:34.989 [2024-07-22 16:23:39.163453] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:34.989 16:23:39 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 89220 0 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89220 0 idle 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@33 -- # local pid=89220 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89220 -w 256 00:46:34.989 16:23:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89220 root 20 0 20.1t 152448 29952 S 0.0 1.2 0:02.45 reactor_0' 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@48 -- # echo 89220 root 20 0 20.1t 152448 29952 S 0.0 1.2 0:02.45 reactor_0 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:35.246 16:23:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:35.246 16:23:39 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:46:35.246 16:23:39 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:46:35.246 16:23:39 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:46:35.246 16:23:39 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 89220 00:46:35.246 16:23:39 -- common/autotest_common.sh@926 
-- # '[' -z 89220 ']' 00:46:35.246 16:23:39 -- common/autotest_common.sh@930 -- # kill -0 89220 00:46:35.246 16:23:39 -- common/autotest_common.sh@931 -- # uname 00:46:35.246 16:23:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:46:35.246 16:23:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89220 00:46:35.246 16:23:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:46:35.246 killing process with pid 89220 00:46:35.246 16:23:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:46:35.246 16:23:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89220' 00:46:35.246 16:23:39 -- common/autotest_common.sh@945 -- # kill 89220 00:46:35.246 16:23:39 -- common/autotest_common.sh@950 -- # wait 89220 00:46:37.144 16:23:41 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:46:37.144 16:23:41 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:46:37.144 16:23:41 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:46:37.144 16:23:41 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:37.144 16:23:41 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:46:37.144 16:23:41 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=89374 00:46:37.144 16:23:41 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:46:37.144 16:23:41 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:37.144 16:23:41 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 89374 /var/tmp/spdk.sock 00:46:37.144 16:23:41 -- common/autotest_common.sh@819 -- # '[' -z 89374 ']' 00:46:37.144 16:23:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:37.144 16:23:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:37.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:37.144 16:23:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:37.144 16:23:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:37.144 16:23:41 -- common/autotest_common.sh@10 -- # set +x 00:46:37.144 [2024-07-22 16:23:41.139718] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:37.144 [2024-07-22 16:23:41.139946] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89374 ] 00:46:37.144 [2024-07-22 16:23:41.307210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:37.402 [2024-07-22 16:23:41.580406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:37.402 [2024-07-22 16:23:41.580534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:37.402 [2024-07-22 16:23:41.580552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:37.659 [2024-07-22 16:23:41.907537] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:46:38.231 16:23:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:38.231 16:23:42 -- common/autotest_common.sh@852 -- # return 0 00:46:38.231 16:23:42 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:46:38.231 16:23:42 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:38.491 Malloc0 00:46:38.491 Malloc1 00:46:38.491 Malloc2 00:46:38.492 16:23:42 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:46:38.492 16:23:42 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:46:38.492 16:23:42 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:46:38.492 16:23:42 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:46:38.492 5000+0 records in 00:46:38.492 5000+0 records out 00:46:38.492 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0238718 s, 429 MB/s 00:46:38.492 16:23:42 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:46:38.749 AIO0 00:46:38.749 16:23:42 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 89374 00:46:38.749 16:23:42 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 89374 00:46:38.749 16:23:42 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=89374 00:46:38.749 16:23:42 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:46:38.749 16:23:42 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:46:38.749 16:23:42 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:46:38.749 16:23:42 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:46:38.749 16:23:42 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:46:38.749 16:23:42 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:46:38.749 16:23:42 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:38.749 16:23:42 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:46:38.749 16:23:42 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:39.007 16:23:43 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:46:39.007 16:23:43 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:46:39.007 16:23:43 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:46:39.007 16:23:43 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:46:39.007 16:23:43 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:46:39.007 16:23:43 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:46:39.007 16:23:43 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:39.007 16:23:43 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:46:39.007 16:23:43 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:46:39.265 16:23:43 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:46:39.265 spdk_thread ids are 1 on reactor0. 
00:46:39.265 16:23:43 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:46:39.265 16:23:43 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:46:39.265 16:23:43 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 89374 0 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89374 0 idle 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@33 -- # local pid=89374 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89374 -w 256 00:46:39.265 16:23:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89374 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.89 reactor_0' 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@48 -- # echo 89374 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.89 reactor_0 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:39.524 16:23:43 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:46:39.524 16:23:43 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 89374 1 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89374 1 idle 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@33 -- # local pid=89374 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89374 -w 256 00:46:39.524 16:23:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89377 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_1' 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@48 -- # echo 89377 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_1 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:39.804 16:23:43 -- 
interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:39.804 16:23:43 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:46:39.804 16:23:43 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 89374 2 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89374 2 idle 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@33 -- # local pid=89374 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89374 -w 256 00:46:39.804 16:23:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89378 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_2' 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@48 -- # echo 89378 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_2 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:39.804 16:23:44 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:39.804 16:23:44 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:46:39.804 16:23:44 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:46:40.063 [2024-07-22 16:23:44.331916] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:46:40.063 [2024-07-22 16:23:44.332279] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:46:40.063 [2024-07-22 16:23:44.332873] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:40.321 16:23:44 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:46:40.578 [2024-07-22 16:23:44.643805] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:46:40.579 [2024-07-22 16:23:44.644227] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:40.579 16:23:44 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:46:40.579 16:23:44 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 89374 0 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 89374 0 busy 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@33 -- # local pid=89374 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89374 -w 256 00:46:40.579 16:23:44 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89374 root 20 0 20.1t 152320 30080 R 90.9 1.2 0:01.45 reactor_0' 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@48 -- # echo 89374 root 20 0 20.1t 152320 30080 R 90.9 1.2 0:01.45 reactor_0 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=90.9 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=90 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@51 -- # [[ 90 -lt 70 ]] 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:40.836 16:23:44 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:46:40.836 16:23:44 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 89374 2 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 89374 2 busy 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@33 -- # local pid=89374 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89374 -w 256 00:46:40.836 16:23:44 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:46:40.836 16:23:45 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89378 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:00.45 reactor_2' 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@48 -- # echo 89378 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:00.45 reactor_2 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:46:41.093 16:23:45 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:46:41.093 16:23:45 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:41.093 16:23:45 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:46:41.352 [2024-07-22 16:23:45.376044] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:46:41.352 [2024-07-22 16:23:45.376289] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:41.352 16:23:45 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:46:41.352 16:23:45 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 89374 2 00:46:41.352 16:23:45 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89374 2 idle 00:46:41.352 16:23:45 -- interrupt/interrupt_common.sh@33 -- # local pid=89374 00:46:41.352 16:23:45 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:46:41.352 16:23:45 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:41.352 16:23:45 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:41.352 16:23:45 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:41.352 16:23:45 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89374 -w 256 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89378 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:00.72 reactor_2' 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@48 -- # echo 89378 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:00.72 reactor_2 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:41.353 16:23:45 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:41.611 16:23:45 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:41.611 16:23:45 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:41.611 16:23:45 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:41.611 16:23:45 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:41.611 16:23:45 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:41.611 16:23:45 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:41.611 16:23:45 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:46:41.870 [2024-07-22 16:23:45.888272] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:46:41.870 [2024-07-22 16:23:45.889081] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:46:41.870 [2024-07-22 16:23:45.889138] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:46:41.870 16:23:45 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:46:41.870 16:23:45 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 89374 0 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 89374 0 idle 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@33 -- # local pid=89374 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@41 -- # hash top 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 89374 -w 256 00:46:41.870 16:23:45 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 89374 root 20 0 20.1t 152448 30080 S 0.0 1.2 0:02.46 reactor_0' 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@48 -- # echo 89374 root 20 0 20.1t 152448 30080 S 0.0 1.2 0:02.46 reactor_0 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:46:41.870 16:23:46 -- interrupt/interrupt_common.sh@56 -- # return 0 00:46:41.870 16:23:46 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:46:41.870 16:23:46 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:46:41.870 16:23:46 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:46:41.870 16:23:46 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 89374 00:46:41.870 16:23:46 -- common/autotest_common.sh@926 -- # '[' -z 89374 ']' 00:46:41.870 16:23:46 -- common/autotest_common.sh@930 -- # kill -0 89374 00:46:41.870 16:23:46 -- common/autotest_common.sh@931 -- # uname 00:46:42.130 16:23:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:46:42.130 16:23:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89374 00:46:42.130 killing process with pid 89374 00:46:42.130 16:23:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:46:42.130 16:23:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:46:42.130 16:23:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89374' 00:46:42.130 16:23:46 -- common/autotest_common.sh@945 -- # kill 89374 00:46:42.130 16:23:46 -- common/autotest_common.sh@950 -- # wait 89374 00:46:44.048 16:23:47 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:46:44.048 16:23:47 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:46:44.048 ************************************ 00:46:44.048 END 
TEST reactor_set_interrupt 00:46:44.048 ************************************ 00:46:44.048 00:46:44.048 real 0m14.084s 00:46:44.048 user 0m14.371s 00:46:44.048 sys 0m2.130s 00:46:44.048 16:23:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:44.048 16:23:47 -- common/autotest_common.sh@10 -- # set +x 00:46:44.048 16:23:47 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:46:44.048 16:23:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:44.048 16:23:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:44.048 16:23:47 -- common/autotest_common.sh@10 -- # set +x 00:46:44.048 ************************************ 00:46:44.048 START TEST reap_unregistered_poller 00:46:44.048 ************************************ 00:46:44.048 16:23:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:46:44.048 * Looking for test storage... 00:46:44.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.048 16:23:47 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:46:44.048 16:23:47 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:46:44.048 16:23:47 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.048 16:23:47 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.048 16:23:47 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:46:44.048 16:23:47 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:44.048 16:23:47 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:46:44.048 16:23:47 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:46:44.048 16:23:47 -- common/autotest_common.sh@34 -- # set -e 00:46:44.048 16:23:47 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:46:44.048 16:23:47 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:46:44.048 16:23:47 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:46:44.048 16:23:47 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:46:44.048 16:23:47 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:46:44.048 16:23:47 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:46:44.048 16:23:47 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:46:44.048 16:23:47 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:46:44.048 16:23:47 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:46:44.048 16:23:47 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:46:44.048 16:23:47 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:46:44.048 16:23:47 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:46:44.048 16:23:47 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:46:44.048 16:23:47 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:46:44.048 16:23:47 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:46:44.048 16:23:47 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:46:44.048 16:23:47 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:46:44.048 16:23:47 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:46:44.048 16:23:47 -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:46:44.048 16:23:47 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:46:44.048 16:23:47 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:46:44.048 16:23:47 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:46:44.048 16:23:47 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:46:44.048 16:23:47 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:46:44.048 16:23:47 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:46:44.048 16:23:47 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:46:44.048 16:23:47 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:46:44.048 16:23:47 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:46:44.048 16:23:47 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:46:44.048 16:23:47 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:46:44.048 16:23:47 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:46:44.048 16:23:47 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:46:44.048 16:23:47 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:46:44.048 16:23:47 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:46:44.048 16:23:47 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:46:44.048 16:23:47 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:46:44.048 16:23:47 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:46:44.048 16:23:47 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:46:44.048 16:23:47 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:46:44.048 16:23:47 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:46:44.048 16:23:47 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:46:44.048 16:23:47 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:46:44.048 16:23:47 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:46:44.048 16:23:47 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:46:44.048 16:23:47 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:46:44.048 16:23:47 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:46:44.049 16:23:47 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:46:44.049 16:23:47 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:46:44.049 16:23:47 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:46:44.049 16:23:47 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:46:44.049 16:23:47 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:46:44.049 16:23:47 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:46:44.049 16:23:47 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:46:44.049 16:23:47 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:46:44.049 16:23:47 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:46:44.049 16:23:47 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:46:44.049 16:23:47 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:46:44.049 16:23:47 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:46:44.049 16:23:47 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:46:44.049 16:23:47 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:46:44.049 16:23:47 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:46:44.049 16:23:47 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:46:44.049 16:23:47 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:46:44.049 16:23:47 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:46:44.049 16:23:47 -- 
common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:46:44.049 16:23:47 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:46:44.049 16:23:47 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:46:44.049 16:23:47 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:46:44.049 16:23:47 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:46:44.049 16:23:47 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:46:44.049 16:23:47 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:46:44.049 16:23:47 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:46:44.049 16:23:47 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:46:44.049 16:23:47 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:46:44.049 16:23:47 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:46:44.049 16:23:47 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:46:44.049 16:23:47 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:46:44.049 16:23:47 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:46:44.049 16:23:47 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:46:44.049 16:23:47 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:46:44.049 16:23:47 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:46:44.049 16:23:47 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:46:44.049 16:23:47 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:46:44.049 16:23:47 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:46:44.049 16:23:47 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:46:44.049 16:23:47 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:46:44.049 16:23:47 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:46:44.049 16:23:47 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:46:44.049 16:23:47 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:46:44.049 16:23:47 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:46:44.049 16:23:47 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:46:44.049 16:23:47 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:46:44.049 16:23:47 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:46:44.049 16:23:47 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:46:44.049 16:23:47 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:46:44.049 16:23:47 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:46:44.049 16:23:47 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:46:44.049 16:23:47 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:46:44.049 16:23:47 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:46:44.049 #define SPDK_CONFIG_H 00:46:44.049 #define SPDK_CONFIG_APPS 1 00:46:44.049 #define SPDK_CONFIG_ARCH native 00:46:44.049 #define SPDK_CONFIG_ASAN 1 00:46:44.049 #undef SPDK_CONFIG_AVAHI 00:46:44.049 #undef SPDK_CONFIG_CET 00:46:44.049 #define SPDK_CONFIG_COVERAGE 1 00:46:44.049 #define SPDK_CONFIG_CROSS_PREFIX 00:46:44.049 #undef SPDK_CONFIG_CRYPTO 00:46:44.049 #undef SPDK_CONFIG_CRYPTO_MLX5 00:46:44.049 #undef SPDK_CONFIG_CUSTOMOCF 00:46:44.049 #undef SPDK_CONFIG_DAOS 00:46:44.049 #define SPDK_CONFIG_DAOS_DIR 00:46:44.049 #define 
SPDK_CONFIG_DEBUG 1 00:46:44.049 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:46:44.049 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:46:44.049 #define SPDK_CONFIG_DPDK_INC_DIR 00:46:44.049 #define SPDK_CONFIG_DPDK_LIB_DIR 00:46:44.049 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:46:44.049 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:46:44.049 #define SPDK_CONFIG_EXAMPLES 1 00:46:44.049 #undef SPDK_CONFIG_FC 00:46:44.049 #define SPDK_CONFIG_FC_PATH 00:46:44.049 #define SPDK_CONFIG_FIO_PLUGIN 1 00:46:44.049 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:46:44.049 #undef SPDK_CONFIG_FUSE 00:46:44.049 #undef SPDK_CONFIG_FUZZER 00:46:44.049 #define SPDK_CONFIG_FUZZER_LIB 00:46:44.049 #undef SPDK_CONFIG_GOLANG 00:46:44.049 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:46:44.049 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:46:44.049 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:46:44.049 #undef SPDK_CONFIG_HAVE_LIBBSD 00:46:44.049 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:46:44.049 #define SPDK_CONFIG_IDXD 1 00:46:44.049 #define SPDK_CONFIG_IDXD_KERNEL 1 00:46:44.049 #undef SPDK_CONFIG_IPSEC_MB 00:46:44.049 #define SPDK_CONFIG_IPSEC_MB_DIR 00:46:44.049 #define SPDK_CONFIG_ISAL 1 00:46:44.049 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:46:44.049 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:46:44.049 #define SPDK_CONFIG_LIBDIR 00:46:44.049 #undef SPDK_CONFIG_LTO 00:46:44.049 #define SPDK_CONFIG_MAX_LCORES 00:46:44.049 #define SPDK_CONFIG_NVME_CUSE 1 00:46:44.049 #undef SPDK_CONFIG_OCF 00:46:44.049 #define SPDK_CONFIG_OCF_PATH 00:46:44.049 #define SPDK_CONFIG_OPENSSL_PATH 00:46:44.049 #undef SPDK_CONFIG_PGO_CAPTURE 00:46:44.049 #undef SPDK_CONFIG_PGO_USE 00:46:44.049 #define SPDK_CONFIG_PREFIX /usr/local 00:46:44.049 #define SPDK_CONFIG_RAID5F 1 00:46:44.049 #undef SPDK_CONFIG_RBD 00:46:44.049 #define SPDK_CONFIG_RDMA 1 00:46:44.049 #define SPDK_CONFIG_RDMA_PROV verbs 00:46:44.049 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:46:44.049 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:46:44.049 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:46:44.049 #undef SPDK_CONFIG_SHARED 00:46:44.049 #undef SPDK_CONFIG_SMA 00:46:44.049 #define SPDK_CONFIG_TESTS 1 00:46:44.049 #undef SPDK_CONFIG_TSAN 00:46:44.049 #define SPDK_CONFIG_UBLK 1 00:46:44.049 #define SPDK_CONFIG_UBSAN 1 00:46:44.049 #define SPDK_CONFIG_UNIT_TESTS 1 00:46:44.049 #undef SPDK_CONFIG_URING 00:46:44.049 #define SPDK_CONFIG_URING_PATH 00:46:44.049 #undef SPDK_CONFIG_URING_ZNS 00:46:44.049 #undef SPDK_CONFIG_USDT 00:46:44.049 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:46:44.049 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:46:44.049 #undef SPDK_CONFIG_VFIO_USER 00:46:44.049 #define SPDK_CONFIG_VFIO_USER_DIR 00:46:44.049 #define SPDK_CONFIG_VHOST 1 00:46:44.049 #define SPDK_CONFIG_VIRTIO 1 00:46:44.049 #undef SPDK_CONFIG_VTUNE 00:46:44.049 #define SPDK_CONFIG_VTUNE_DIR 00:46:44.049 #define SPDK_CONFIG_WERROR 1 00:46:44.049 #define SPDK_CONFIG_WPDK_DIR 00:46:44.049 #undef SPDK_CONFIG_XNVME 00:46:44.049 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:46:44.049 16:23:47 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:46:44.049 16:23:47 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:44.049 16:23:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:44.049 16:23:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:44.049 16:23:48 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:46:44.049 16:23:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:44.049 16:23:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:44.049 16:23:48 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:44.049 16:23:48 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:44.049 16:23:48 -- paths/export.sh@6 -- # export PATH 00:46:44.049 16:23:48 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:44.049 16:23:48 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:46:44.049 16:23:48 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:46:44.049 16:23:48 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:46:44.050 16:23:48 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:46:44.050 16:23:48 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:46:44.050 16:23:48 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:46:44.050 16:23:48 -- pm/common@16 -- # TEST_TAG=N/A 00:46:44.050 16:23:48 -- pm/common@17 -- # 
TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:46:44.050 16:23:48 -- common/autotest_common.sh@52 -- # : 1 00:46:44.050 16:23:48 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:46:44.050 16:23:48 -- common/autotest_common.sh@56 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:46:44.050 16:23:48 -- common/autotest_common.sh@58 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:46:44.050 16:23:48 -- common/autotest_common.sh@60 -- # : 1 00:46:44.050 16:23:48 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:46:44.050 16:23:48 -- common/autotest_common.sh@62 -- # : 1 00:46:44.050 16:23:48 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:46:44.050 16:23:48 -- common/autotest_common.sh@64 -- # : 00:46:44.050 16:23:48 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:46:44.050 16:23:48 -- common/autotest_common.sh@66 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:46:44.050 16:23:48 -- common/autotest_common.sh@68 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:46:44.050 16:23:48 -- common/autotest_common.sh@70 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:46:44.050 16:23:48 -- common/autotest_common.sh@72 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:46:44.050 16:23:48 -- common/autotest_common.sh@74 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:46:44.050 16:23:48 -- common/autotest_common.sh@76 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:46:44.050 16:23:48 -- common/autotest_common.sh@78 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:46:44.050 16:23:48 -- common/autotest_common.sh@80 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:46:44.050 16:23:48 -- common/autotest_common.sh@82 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:46:44.050 16:23:48 -- common/autotest_common.sh@84 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:46:44.050 16:23:48 -- common/autotest_common.sh@86 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:46:44.050 16:23:48 -- common/autotest_common.sh@88 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:46:44.050 16:23:48 -- common/autotest_common.sh@90 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:46:44.050 16:23:48 -- common/autotest_common.sh@92 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:46:44.050 16:23:48 -- common/autotest_common.sh@94 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:46:44.050 16:23:48 -- common/autotest_common.sh@96 -- # : rdma 00:46:44.050 16:23:48 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:46:44.050 16:23:48 -- common/autotest_common.sh@98 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:46:44.050 16:23:48 -- common/autotest_common.sh@100 -- # : 0 00:46:44.050 
16:23:48 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:46:44.050 16:23:48 -- common/autotest_common.sh@102 -- # : 1 00:46:44.050 16:23:48 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:46:44.050 16:23:48 -- common/autotest_common.sh@104 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:46:44.050 16:23:48 -- common/autotest_common.sh@106 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:46:44.050 16:23:48 -- common/autotest_common.sh@108 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:46:44.050 16:23:48 -- common/autotest_common.sh@110 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:46:44.050 16:23:48 -- common/autotest_common.sh@112 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:46:44.050 16:23:48 -- common/autotest_common.sh@114 -- # : 1 00:46:44.050 16:23:48 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:46:44.050 16:23:48 -- common/autotest_common.sh@116 -- # : 1 00:46:44.050 16:23:48 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:46:44.050 16:23:48 -- common/autotest_common.sh@118 -- # : 00:46:44.050 16:23:48 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:46:44.050 16:23:48 -- common/autotest_common.sh@120 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:46:44.050 16:23:48 -- common/autotest_common.sh@122 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:46:44.050 16:23:48 -- common/autotest_common.sh@124 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:46:44.050 16:23:48 -- common/autotest_common.sh@126 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:46:44.050 16:23:48 -- common/autotest_common.sh@128 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:46:44.050 16:23:48 -- common/autotest_common.sh@130 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:46:44.050 16:23:48 -- common/autotest_common.sh@132 -- # : 00:46:44.050 16:23:48 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:46:44.050 16:23:48 -- common/autotest_common.sh@134 -- # : true 00:46:44.050 16:23:48 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:46:44.050 16:23:48 -- common/autotest_common.sh@136 -- # : 1 00:46:44.050 16:23:48 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:46:44.050 16:23:48 -- common/autotest_common.sh@138 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:46:44.050 16:23:48 -- common/autotest_common.sh@140 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:46:44.050 16:23:48 -- common/autotest_common.sh@142 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:46:44.050 16:23:48 -- common/autotest_common.sh@144 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:46:44.050 16:23:48 -- common/autotest_common.sh@146 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:46:44.050 16:23:48 -- common/autotest_common.sh@148 -- # : 
00:46:44.050 16:23:48 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:46:44.050 16:23:48 -- common/autotest_common.sh@150 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:46:44.050 16:23:48 -- common/autotest_common.sh@152 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:46:44.050 16:23:48 -- common/autotest_common.sh@154 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:46:44.050 16:23:48 -- common/autotest_common.sh@156 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:46:44.050 16:23:48 -- common/autotest_common.sh@158 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:46:44.050 16:23:48 -- common/autotest_common.sh@160 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:46:44.050 16:23:48 -- common/autotest_common.sh@163 -- # : 00:46:44.050 16:23:48 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:46:44.050 16:23:48 -- common/autotest_common.sh@165 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:46:44.050 16:23:48 -- common/autotest_common.sh@167 -- # : 0 00:46:44.050 16:23:48 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:46:44.050 16:23:48 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:46:44.050 16:23:48 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:46:44.050 16:23:48 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:46:44.050 16:23:48 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:46:44.050 16:23:48 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:46:44.050 16:23:48 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:46:44.050 16:23:48 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:46:44.050 16:23:48 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:46:44.050 16:23:48 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:46:44.051 16:23:48 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:46:44.051 16:23:48 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:46:44.051 16:23:48 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:46:44.051 16:23:48 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:46:44.051 16:23:48 -- common/autotest_common.sh@196 -- # cat 00:46:44.051 16:23:48 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:46:44.051 16:23:48 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:46:44.051 16:23:48 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:46:44.051 16:23:48 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:46:44.051 16:23:48 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:46:44.051 16:23:48 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:46:44.051 16:23:48 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:46:44.051 16:23:48 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:46:44.051 16:23:48 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:46:44.051 16:23:48 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:46:44.051 16:23:48 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:46:44.051 16:23:48 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:46:44.051 16:23:48 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:46:44.051 16:23:48 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:46:44.051 16:23:48 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:46:44.051 16:23:48 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:46:44.051 16:23:48 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:46:44.051 16:23:48 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:46:44.051 16:23:48 -- 
common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:46:44.051 16:23:48 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:46:44.051 16:23:48 -- common/autotest_common.sh@249 -- # export valgrind= 00:46:44.051 16:23:48 -- common/autotest_common.sh@249 -- # valgrind= 00:46:44.051 16:23:48 -- common/autotest_common.sh@255 -- # uname -s 00:46:44.051 16:23:48 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:46:44.051 16:23:48 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:46:44.051 16:23:48 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:46:44.051 16:23:48 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:46:44.051 16:23:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@265 -- # MAKE=make 00:46:44.051 16:23:48 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:46:44.051 16:23:48 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:46:44.051 16:23:48 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:46:44.051 16:23:48 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:46:44.051 16:23:48 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:46:44.051 16:23:48 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:46:44.051 16:23:48 -- common/autotest_common.sh@309 -- # [[ -z 89551 ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@309 -- # kill -0 89551 00:46:44.051 16:23:48 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:46:44.051 16:23:48 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:46:44.051 16:23:48 -- common/autotest_common.sh@322 -- # local mount target_dir 00:46:44.051 16:23:48 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:46:44.051 16:23:48 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:46:44.051 16:23:48 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:46:44.051 16:23:48 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:46:44.051 16:23:48 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.oiwFq9 00:46:44.051 16:23:48 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:46:44.051 16:23:48 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.oiwFq9/tests/interrupt /tmp/spdk.oiwFq9 00:46:44.051 16:23:48 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@318 -- # df -T 00:46:44.051 16:23:48 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249308672 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254023168 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=4714496 
00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=10286329856 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19681529856 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=9378422784 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=6268858368 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6270115840 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda16 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=777306112 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=923156480 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=81207296 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=103000064 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=6395904 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254010880 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254023168 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=12288 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest_2/ubuntu2404-libvirt/output 00:46:44.051 16:23:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # 
avails["$mount"]=93533941760 00:46:44.051 16:23:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:46:44.051 16:23:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=6168838144 00:46:44.051 16:23:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:46:44.051 16:23:48 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:46:44.051 * Looking for test storage... 00:46:44.051 16:23:48 -- common/autotest_common.sh@359 -- # local target_space new_size 00:46:44.051 16:23:48 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:46:44.051 16:23:48 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.051 16:23:48 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:46:44.051 16:23:48 -- common/autotest_common.sh@363 -- # mount=/ 00:46:44.051 16:23:48 -- common/autotest_common.sh@365 -- # target_space=10286329856 00:46:44.051 16:23:48 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:46:44.051 16:23:48 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:46:44.051 16:23:48 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:46:44.051 16:23:48 -- common/autotest_common.sh@372 -- # new_size=11593015296 00:46:44.051 16:23:48 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:46:44.051 16:23:48 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.051 16:23:48 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.051 16:23:48 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:46:44.051 16:23:48 -- common/autotest_common.sh@380 -- # return 0 00:46:44.051 16:23:48 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:46:44.051 16:23:48 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:46:44.051 16:23:48 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:46:44.051 16:23:48 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:46:44.052 16:23:48 -- common/autotest_common.sh@1672 -- # true 00:46:44.052 16:23:48 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:46:44.052 16:23:48 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:46:44.052 16:23:48 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:46:44.052 16:23:48 -- common/autotest_common.sh@27 -- # exec 00:46:44.052 16:23:48 -- common/autotest_common.sh@29 -- # exec 00:46:44.052 16:23:48 -- common/autotest_common.sh@31 -- # xtrace_restore 00:46:44.052 16:23:48 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:46:44.052 16:23:48 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:46:44.052 16:23:48 -- common/autotest_common.sh@18 -- # set -x 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:46:44.052 16:23:48 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:46:44.052 16:23:48 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:46:44.052 16:23:48 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=89590 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:44.052 16:23:48 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 89590 /var/tmp/spdk.sock 00:46:44.052 16:23:48 -- common/autotest_common.sh@819 -- # '[' -z 89590 ']' 00:46:44.052 16:23:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:44.052 16:23:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:46:44.052 16:23:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:44.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:44.052 16:23:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:46:44.052 16:23:48 -- common/autotest_common.sh@10 -- # set +x 00:46:44.052 [2024-07-22 16:23:48.170682] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
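The trace above starts the interrupt_tgt example with core mask 0x07 and then blocks in waitforlisten until the target answers RPCs on /var/tmp/spdk.sock. A minimal sketch of such a wait loop, assuming a simple poll against rpc_get_methods (an illustration only, not the actual autotest_common.sh implementation):

  # Sketch: poll the target's RPC socket until it responds or the process dies.
  # rpc.py and rpc_get_methods are real SPDK interfaces; the retry logic is assumed.
  wait_for_rpc_socket() {
      local pid=$1 sock=$2 retries=${3:-100}
      while ((retries-- > 0)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target exited before it started listening
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
              return 0                              # RPC server is up and answering
          fi
          sleep 0.1
      done
      return 1
  }
  # e.g. wait_for_rpc_socket "$intr_tgt_pid" /var/tmp/spdk.sock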
00:46:44.052 [2024-07-22 16:23:48.171187] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89590 ] 00:46:44.309 [2024-07-22 16:23:48.347546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:44.567 [2024-07-22 16:23:48.617175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:44.567 [2024-07-22 16:23:48.617246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:44.567 [2024-07-22 16:23:48.617256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:44.825 [2024-07-22 16:23:48.943563] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:45.085 16:23:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:46:45.085 16:23:49 -- common/autotest_common.sh@852 -- # return 0 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:46:45.085 16:23:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:45.085 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:46:45.085 16:23:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:46:45.085 "name": "app_thread", 00:46:45.085 "id": 1, 00:46:45.085 "active_pollers": [], 00:46:45.085 "timed_pollers": [ 00:46:45.085 { 00:46:45.085 "name": "rpc_subsystem_poll", 00:46:45.085 "id": 1, 00:46:45.085 "state": "waiting", 00:46:45.085 "run_count": 0, 00:46:45.085 "busy_count": 0, 00:46:45.085 "period_ticks": 8800000 00:46:45.085 } 00:46:45.085 ], 00:46:45.085 "paused_pollers": [] 00:46:45.085 }' 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:46:45.085 16:23:49 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:46:45.085 16:23:49 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:46:45.085 16:23:49 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:46:45.085 16:23:49 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:46:45.085 5000+0 records in 00:46:45.085 5000+0 records out 00:46:45.085 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0237433 s, 431 MB/s 00:46:45.085 16:23:49 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:46:45.344 AIO0 00:46:45.344 16:23:49 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:46:45.602 16:23:49 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:46:45.602 16:23:49 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:46:45.602 16:23:49 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:46:45.602 16:23:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:46:45.602 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:46:45.602 16:23:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:46:45.602 16:23:49 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:46:45.602 "name": "app_thread", 00:46:45.602 "id": 1, 00:46:45.602 "active_pollers": [], 00:46:45.602 "timed_pollers": [ 00:46:45.602 { 00:46:45.602 "name": "rpc_subsystem_poll", 00:46:45.602 "id": 1, 00:46:45.602 "state": "waiting", 00:46:45.602 "run_count": 0, 00:46:45.602 "busy_count": 0, 00:46:45.602 "period_ticks": 8800000 00:46:45.602 } 00:46:45.602 ], 00:46:45.602 "paused_pollers": [] 00:46:45.602 }' 00:46:45.602 16:23:49 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:46:45.905 16:23:49 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:46:45.905 16:23:49 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:46:45.905 16:23:49 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:46:45.905 16:23:49 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:46:45.905 16:23:49 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:46:45.905 16:23:49 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:46:45.905 16:23:49 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 89590 00:46:45.905 16:23:49 -- common/autotest_common.sh@926 -- # '[' -z 89590 ']' 00:46:45.905 16:23:49 -- common/autotest_common.sh@930 -- # kill -0 89590 00:46:45.905 16:23:49 -- common/autotest_common.sh@931 -- # uname 00:46:45.905 16:23:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:46:45.905 16:23:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89590 00:46:45.905 16:23:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:46:45.905 16:23:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:46:45.905 killing process with pid 89590 00:46:45.905 16:23:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89590' 00:46:45.905 16:23:49 -- common/autotest_common.sh@945 -- # kill 89590 00:46:45.905 16:23:49 -- common/autotest_common.sh@950 -- # wait 89590 00:46:47.310 16:23:51 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:46:47.310 16:23:51 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:46:47.310 00:46:47.310 real 0m3.444s 00:46:47.310 user 0m2.925s 00:46:47.310 sys 0m0.655s 00:46:47.310 16:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:47.310 16:23:51 -- common/autotest_common.sh@10 -- # set +x 00:46:47.310 ************************************ 00:46:47.310 END TEST reap_unregistered_poller 00:46:47.310 ************************************ 00:46:47.310 16:23:51 -- spdk/autotest.sh@204 -- # uname -s 00:46:47.310 16:23:51 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:46:47.310 16:23:51 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:46:47.310 16:23:51 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:46:47.310 16:23:51 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:46:47.310 16:23:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:46:47.310 16:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:47.310 16:23:51 -- common/autotest_common.sh@10 -- 
# set +x 00:46:47.310 ************************************ 00:46:47.310 START TEST spdk_dd 00:46:47.310 ************************************ 00:46:47.310 16:23:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:46:47.310 * Looking for test storage... 00:46:47.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:46:47.310 16:23:51 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:47.310 16:23:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:47.310 16:23:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:47.310 16:23:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:47.310 16:23:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:47.310 16:23:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:47.310 16:23:51 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:47.310 16:23:51 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:47.310 16:23:51 -- paths/export.sh@6 -- # export PATH 00:46:47.310 16:23:51 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:47.310 16:23:51 -- dd/dd.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:46:47.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:46:47.569 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:46:48.942 16:23:52 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:46:48.942 16:23:52 -- dd/dd.sh@11 -- # nvme_in_userspace 00:46:48.942 16:23:52 -- scripts/common.sh@311 -- # local bdf bdfs 00:46:48.942 16:23:52 -- scripts/common.sh@312 -- # local nvmes 00:46:48.942 16:23:52 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:46:48.942 16:23:52 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:46:48.942 16:23:52 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:46:48.942 16:23:52 -- scripts/common.sh@297 -- # local bdf= 00:46:48.942 16:23:52 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:46:48.942 16:23:52 -- scripts/common.sh@232 -- # local class 00:46:48.942 16:23:52 -- scripts/common.sh@233 -- # local subclass 00:46:48.942 16:23:52 -- scripts/common.sh@234 -- # local progif 00:46:48.942 16:23:52 -- scripts/common.sh@235 -- # printf %02x 1 00:46:48.942 16:23:52 -- scripts/common.sh@235 -- # class=01 00:46:48.942 16:23:52 -- scripts/common.sh@236 -- # printf %02x 8 00:46:48.942 16:23:52 -- scripts/common.sh@236 -- # subclass=08 00:46:48.942 16:23:52 -- scripts/common.sh@237 -- # printf %02x 2 00:46:48.942 16:23:52 -- scripts/common.sh@237 -- # progif=02 00:46:48.942 16:23:52 -- scripts/common.sh@239 -- # hash lspci 00:46:48.942 16:23:52 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:46:48.942 16:23:52 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:46:48.942 16:23:52 -- scripts/common.sh@242 -- # grep -i -- -p02 00:46:48.942 16:23:52 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:46:48.942 16:23:52 -- scripts/common.sh@244 -- # tr -d '"' 00:46:48.942 16:23:52 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:46:48.942 16:23:52 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:46:48.942 16:23:52 -- scripts/common.sh@15 -- # local i 00:46:48.942 16:23:52 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:46:48.942 16:23:52 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:46:48.942 16:23:52 -- scripts/common.sh@24 -- # return 0 00:46:48.942 16:23:52 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:46:48.942 16:23:52 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:46:48.942 16:23:52 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:46:48.942 16:23:52 -- scripts/common.sh@322 -- # uname -s 00:46:48.942 16:23:52 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:46:48.942 16:23:52 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:46:48.942 16:23:52 -- scripts/common.sh@327 -- # (( 1 )) 00:46:48.942 16:23:52 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:46:48.942 16:23:52 -- dd/dd.sh@13 -- # check_liburing 00:46:48.942 16:23:52 -- dd/common.sh@139 -- # local lib so 00:46:48.942 16:23:52 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:46:48.942 16:23:52 -- dd/common.sh@142 -- # read -r lib _ so _ 00:46:48.942 16:23:52 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:46:48.942 16:23:52 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:48.942 16:23:53 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:46:48.942 16:23:53 -- dd/common.sh@142 -- # read -r lib _ so _ 00:46:48.942 16:23:53 -- dd/common.sh@143 -- # 
[[ libasan.so.8 == liburing.so.* ]] 00:46:48.942 16:23:53 -- dd/common.sh@142 -- # read -r lib _ so _ 00:46:48.942 16:23:53 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:46:48.942 16:23:53 -- dd/common.sh@142 -- # read -r lib _ so _ 00:46:48.942 16:23:53 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:46:48.942 16:23:53 -- dd/common.sh@142 -- # read -r lib _ so _ 00:46:48.942 16:23:53 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:46:48.942 16:23:53 -- dd/common.sh@142 -- # read -r lib _ so _ 00:46:48.942 16:23:53 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:46:48.942 16:23:53 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:46:48.942 * spdk_dd linked to liburing 00:46:48.942 16:23:53 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:46:48.942 16:23:53 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:46:48.942 16:23:53 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:46:48.942 16:23:53 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:46:48.942 16:23:53 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:46:48.942 16:23:53 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:46:48.942 16:23:53 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:46:48.942 16:23:53 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:46:48.942 16:23:53 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:46:48.942 16:23:53 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:46:48.942 16:23:53 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:46:48.942 16:23:53 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:46:48.942 16:23:53 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:46:48.942 16:23:53 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:46:48.942 16:23:53 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:46:48.942 16:23:53 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:46:48.942 16:23:53 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:46:48.942 16:23:53 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:46:48.942 16:23:53 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:46:48.942 16:23:53 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:46:48.942 16:23:53 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:46:48.942 16:23:53 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:46:48.942 16:23:53 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:46:48.942 16:23:53 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:46:48.942 16:23:53 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:46:48.942 16:23:53 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:46:48.942 16:23:53 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:46:48.942 16:23:53 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:46:48.942 16:23:53 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:46:48.942 16:23:53 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:46:48.942 16:23:53 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:46:48.942 16:23:53 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:46:48.942 16:23:53 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:46:48.942 16:23:53 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:46:48.942 16:23:53 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:46:48.942 16:23:53 -- common/build_config.sh@34 -- 
# CONFIG_FUZZER_LIB= 00:46:48.942 16:23:53 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:46:48.942 16:23:53 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:46:48.942 16:23:53 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:46:48.942 16:23:53 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:46:48.942 16:23:53 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:46:48.942 16:23:53 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:46:48.942 16:23:53 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:46:48.942 16:23:53 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:46:48.942 16:23:53 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:46:48.942 16:23:53 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:46:48.942 16:23:53 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:46:48.942 16:23:53 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:46:48.942 16:23:53 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:46:48.942 16:23:53 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:46:48.942 16:23:53 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:46:48.942 16:23:53 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:46:48.942 16:23:53 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:46:48.942 16:23:53 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:46:48.942 16:23:53 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:46:48.942 16:23:53 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:46:48.942 16:23:53 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:46:48.942 16:23:53 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:46:48.942 16:23:53 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:46:48.942 16:23:53 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:46:48.942 16:23:53 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:46:48.942 16:23:53 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:46:48.942 16:23:53 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:46:48.942 16:23:53 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:46:48.942 16:23:53 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:46:48.942 16:23:53 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:46:48.942 16:23:53 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:46:48.942 16:23:53 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:46:48.942 16:23:53 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:46:48.942 16:23:53 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:46:48.942 16:23:53 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:46:48.942 16:23:53 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:46:48.942 16:23:53 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:46:48.942 16:23:53 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:46:48.942 16:23:53 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:46:48.942 16:23:53 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:46:48.942 16:23:53 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:46:48.942 16:23:53 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:46:48.942 16:23:53 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:46:48.942 16:23:53 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:46:48.942 16:23:53 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:46:48.942 16:23:53 -- dd/common.sh@149 -- # [[ n != y ]] 00:46:48.942 16:23:53 -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, 
but no liburing support requested?\n' 00:46:48.942 * spdk_dd built with liburing, but no liburing support requested? 00:46:48.942 16:23:53 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:46:48.942 16:23:53 -- dd/common.sh@156 -- # export liburing_in_use=1 00:46:48.942 16:23:53 -- dd/common.sh@156 -- # liburing_in_use=1 00:46:48.942 16:23:53 -- dd/common.sh@157 -- # return 0 00:46:48.942 16:23:53 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:46:48.942 16:23:53 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:46:48.942 16:23:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:46:48.942 16:23:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:48.942 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:46:48.942 ************************************ 00:46:48.942 START TEST spdk_dd_basic_rw 00:46:48.942 ************************************ 00:46:48.942 16:23:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:46:48.942 * Looking for test storage... 00:46:48.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:46:48.942 16:23:53 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:48.942 16:23:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:48.942 16:23:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:48.942 16:23:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:48.942 16:23:53 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:48.942 16:23:53 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:48.942 16:23:53 -- paths/export.sh@4 -- # 
PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:48.942 16:23:53 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:48.942 16:23:53 -- paths/export.sh@6 -- # export PATH 00:46:48.942 16:23:53 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:46:48.942 16:23:53 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:46:48.942 16:23:53 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:46:48.942 16:23:53 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:46:48.942 16:23:53 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:46:48.942 16:23:53 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:46:48.943 16:23:53 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:46:48.943 16:23:53 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:46:48.943 16:23:53 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:46:48.943 16:23:53 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:48.943 16:23:53 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:46:48.943 16:23:53 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:46:48.943 16:23:53 -- dd/common.sh@126 -- # mapfile -t id 00:46:48.943 16:23:53 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:46:49.245 16:23:53 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 
[1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set 
Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 7 Host Read Commands: 2292 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported 
Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:46:49.245 16:23:53 -- dd/common.sh@130 -- # lbaf=04 00:46:49.246 16:23:53 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security 
Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 
Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 7 Host Read Commands: 2292 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:46:49.246 16:23:53 -- dd/common.sh@132 -- # lbaf=4096 00:46:49.246 16:23:53 -- dd/common.sh@134 -- # echo 4096 00:46:49.246 16:23:53 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:46:49.246 16:23:53 -- dd/basic_rw.sh@96 -- # : 00:46:49.246 16:23:53 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:46:49.246 16:23:53 -- dd/basic_rw.sh@96 -- # gen_conf 00:46:49.246 16:23:53 -- dd/common.sh@31 -- # xtrace_disable 00:46:49.246 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:46:49.246 16:23:53 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:46:49.246 16:23:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:49.246 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:46:49.246 ************************************ 00:46:49.246 START TEST dd_bs_lt_native_bs 00:46:49.246 ************************************ 00:46:49.246 { 00:46:49.246 "subsystems": [ 00:46:49.246 { 00:46:49.246 "subsystem": "bdev", 00:46:49.246 "config": [ 00:46:49.246 { 00:46:49.246 "params": { 00:46:49.246 "trtype": "pcie", 00:46:49.246 "traddr": "0000:00:06.0", 00:46:49.246 "name": "Nvme0" 00:46:49.246 }, 00:46:49.246 "method": "bdev_nvme_attach_controller" 00:46:49.246 }, 00:46:49.246 { 00:46:49.246 "method": 
"bdev_wait_for_examine" 00:46:49.246 } 00:46:49.246 ] 00:46:49.246 } 00:46:49.246 ] 00:46:49.246 } 00:46:49.246 16:23:53 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:46:49.246 16:23:53 -- common/autotest_common.sh@640 -- # local es=0 00:46:49.246 16:23:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:46:49.246 16:23:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:49.246 16:23:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:46:49.246 16:23:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:49.246 16:23:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:46:49.246 16:23:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:49.246 16:23:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:46:49.246 16:23:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:49.246 16:23:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:46:49.246 16:23:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:46:49.246 [2024-07-22 16:23:53.468111] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:49.246 [2024-07-22 16:23:53.468288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89870 ] 00:46:49.503 [2024-07-22 16:23:53.643038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:49.762 [2024-07-22 16:23:53.914734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:50.327 [2024-07-22 16:23:54.318546] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:46:50.327 [2024-07-22 16:23:54.318651] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:46:50.894 [2024-07-22 16:23:54.886390] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:46:51.153 16:23:55 -- common/autotest_common.sh@643 -- # es=234 00:46:51.153 16:23:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:46:51.153 16:23:55 -- common/autotest_common.sh@652 -- # es=106 00:46:51.153 ************************************ 00:46:51.153 END TEST dd_bs_lt_native_bs 00:46:51.153 ************************************ 00:46:51.153 16:23:55 -- common/autotest_common.sh@653 -- # case "$es" in 00:46:51.153 16:23:55 -- common/autotest_common.sh@660 -- # es=1 00:46:51.153 16:23:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:46:51.153 00:46:51.153 real 0m1.986s 00:46:51.153 user 0m1.584s 00:46:51.153 sys 0m0.320s 00:46:51.153 16:23:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:46:51.153 16:23:55 -- common/autotest_common.sh@10 -- # set +x 00:46:51.153 16:23:55 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:46:51.153 16:23:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:46:51.153 16:23:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:46:51.153 
16:23:55 -- common/autotest_common.sh@10 -- # set +x 00:46:51.412 ************************************ 00:46:51.412 START TEST dd_rw 00:46:51.412 ************************************ 00:46:51.412 16:23:55 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:46:51.412 16:23:55 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:46:51.412 16:23:55 -- dd/basic_rw.sh@12 -- # local count size 00:46:51.412 16:23:55 -- dd/basic_rw.sh@13 -- # local qds bss 00:46:51.412 16:23:55 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:46:51.412 16:23:55 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:46:51.412 16:23:55 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:46:51.412 16:23:55 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:46:51.412 16:23:55 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:46:51.412 16:23:55 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:46:51.412 16:23:55 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:46:51.412 16:23:55 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:46:51.412 16:23:55 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:46:51.412 16:23:55 -- dd/basic_rw.sh@23 -- # count=15 00:46:51.412 16:23:55 -- dd/basic_rw.sh@24 -- # count=15 00:46:51.412 16:23:55 -- dd/basic_rw.sh@25 -- # size=61440 00:46:51.412 16:23:55 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:46:51.412 16:23:55 -- dd/common.sh@98 -- # xtrace_disable 00:46:51.412 16:23:55 -- common/autotest_common.sh@10 -- # set +x 00:46:51.979 16:23:56 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:46:51.979 16:23:56 -- dd/basic_rw.sh@30 -- # gen_conf 00:46:51.979 16:23:56 -- dd/common.sh@31 -- # xtrace_disable 00:46:51.979 16:23:56 -- common/autotest_common.sh@10 -- # set +x 00:46:51.979 { 00:46:51.979 "subsystems": [ 00:46:51.979 { 00:46:51.979 "subsystem": "bdev", 00:46:51.979 "config": [ 00:46:51.979 { 00:46:51.979 "params": { 00:46:51.979 "trtype": "pcie", 00:46:51.979 "traddr": "0000:00:06.0", 00:46:51.979 "name": "Nvme0" 00:46:51.979 }, 00:46:51.979 "method": "bdev_nvme_attach_controller" 00:46:51.979 }, 00:46:51.979 { 00:46:51.979 "method": "bdev_wait_for_examine" 00:46:51.979 } 00:46:51.979 ] 00:46:51.979 } 00:46:51.979 ] 00:46:51.979 } 00:46:51.979 [2024-07-22 16:23:56.125318] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
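The dd_rw pass that begins here sweeps the block sizes derived from the controller's native 4096-byte LBA format (4096 shifted left by 0..2, i.e. 4096, 8192 and 16384 bytes) against queue depths 1 and 64, always transferring a whole number of blocks (15 x 4096 = 61440 bytes in this first iteration). Each iteration is a write / read-back / compare cycle. The following sketch reconstructs one iteration from the commands traced in this log; gen_conf stands for the helper sourced from dd/common.sh that emits the bdev JSON configuration, and the loop in test/dd/basic_rw.sh may differ in detail.

# One dd_rw iteration, reconstructed from the traced commands (a sketch, not
# the literal contents of basic_rw.sh)
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # random data produced by gen_bytes 61440
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
bs=4096 qd=1 count=15                                  # 15 * 4096 = 61440 bytes

# write the random file to the NVMe bdev ...
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
# ... read the same region back into a second file ...
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
# ... and fail the iteration if the round trip changed anything
diff -q "$DUMP0" "$DUMP1"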
00:46:51.979 [2024-07-22 16:23:56.125785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89919 ] 00:46:52.239 [2024-07-22 16:23:56.313019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:52.500 [2024-07-22 16:23:56.582144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:54.134  Copying: 60/60 [kB] (average 19 MBps) 00:46:54.134 00:46:54.134 16:23:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:46:54.134 16:23:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:46:54.134 16:23:58 -- dd/common.sh@31 -- # xtrace_disable 00:46:54.134 16:23:58 -- common/autotest_common.sh@10 -- # set +x 00:46:54.134 { 00:46:54.134 "subsystems": [ 00:46:54.134 { 00:46:54.134 "subsystem": "bdev", 00:46:54.134 "config": [ 00:46:54.134 { 00:46:54.134 "params": { 00:46:54.134 "trtype": "pcie", 00:46:54.134 "traddr": "0000:00:06.0", 00:46:54.134 "name": "Nvme0" 00:46:54.134 }, 00:46:54.134 "method": "bdev_nvme_attach_controller" 00:46:54.134 }, 00:46:54.134 { 00:46:54.134 "method": "bdev_wait_for_examine" 00:46:54.134 } 00:46:54.134 ] 00:46:54.134 } 00:46:54.134 ] 00:46:54.134 } 00:46:54.134 [2024-07-22 16:23:58.377448] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:46:54.134 [2024-07-22 16:23:58.377644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89948 ] 00:46:54.392 [2024-07-22 16:23:58.548659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:54.651 [2024-07-22 16:23:58.806708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:56.148  Copying: 60/60 [kB] (average 19 MBps) 00:46:56.148 00:46:56.149 16:24:00 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:46:56.149 16:24:00 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:46:56.149 16:24:00 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:46:56.149 16:24:00 -- dd/common.sh@11 -- # local nvme_ref= 00:46:56.149 16:24:00 -- dd/common.sh@12 -- # local size=61440 00:46:56.149 16:24:00 -- dd/common.sh@14 -- # local bs=1048576 00:46:56.149 16:24:00 -- dd/common.sh@15 -- # local count=1 00:46:56.149 16:24:00 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:46:56.149 16:24:00 -- dd/common.sh@18 -- # gen_conf 00:46:56.149 16:24:00 -- dd/common.sh@31 -- # xtrace_disable 00:46:56.149 16:24:00 -- common/autotest_common.sh@10 -- # set +x 00:46:56.149 { 00:46:56.149 "subsystems": [ 00:46:56.149 { 00:46:56.149 "subsystem": "bdev", 00:46:56.149 "config": [ 00:46:56.149 { 00:46:56.149 "params": { 00:46:56.149 "trtype": "pcie", 00:46:56.149 "traddr": "0000:00:06.0", 00:46:56.149 "name": "Nvme0" 00:46:56.149 }, 00:46:56.149 "method": "bdev_nvme_attach_controller" 00:46:56.149 }, 00:46:56.149 { 00:46:56.149 "method": "bdev_wait_for_examine" 00:46:56.149 } 00:46:56.149 ] 00:46:56.149 } 00:46:56.149 ] 00:46:56.149 } 00:46:56.149 [2024-07-22 16:24:00.374189] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:46:56.149 [2024-07-22 16:24:00.374414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89975 ] 00:46:56.407 [2024-07-22 16:24:00.549006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:56.666 [2024-07-22 16:24:00.795971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:58.605  Copying: 1024/1024 [kB] (average 500 MBps) 00:46:58.605 00:46:58.605 16:24:02 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:46:58.605 16:24:02 -- dd/basic_rw.sh@23 -- # count=15 00:46:58.605 16:24:02 -- dd/basic_rw.sh@24 -- # count=15 00:46:58.605 16:24:02 -- dd/basic_rw.sh@25 -- # size=61440 00:46:58.605 16:24:02 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:46:58.605 16:24:02 -- dd/common.sh@98 -- # xtrace_disable 00:46:58.605 16:24:02 -- common/autotest_common.sh@10 -- # set +x 00:46:58.869 16:24:03 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:46:58.869 16:24:03 -- dd/basic_rw.sh@30 -- # gen_conf 00:46:58.869 16:24:03 -- dd/common.sh@31 -- # xtrace_disable 00:46:58.869 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:46:59.127 { 00:46:59.127 "subsystems": [ 00:46:59.127 { 00:46:59.127 "subsystem": "bdev", 00:46:59.127 "config": [ 00:46:59.127 { 00:46:59.127 "params": { 00:46:59.127 "trtype": "pcie", 00:46:59.127 "traddr": "0000:00:06.0", 00:46:59.127 "name": "Nvme0" 00:46:59.127 }, 00:46:59.127 "method": "bdev_nvme_attach_controller" 00:46:59.127 }, 00:46:59.127 { 00:46:59.127 "method": "bdev_wait_for_examine" 00:46:59.127 } 00:46:59.127 ] 00:46:59.127 } 00:46:59.127 ] 00:46:59.127 } 00:46:59.127 [2024-07-22 16:24:03.192193] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
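Between passes the harness runs clear_nvme, which zero-fills the first 1 MiB of the bdev so that data left over from the previous pass cannot satisfy the next read-back. The traced command amounts to the following (simplified; the real helper lives in dd/common.sh):

# clear_nvme, as seen in the trace above
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)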
00:46:59.127 [2024-07-22 16:24:03.192607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90011 ] 00:46:59.127 [2024-07-22 16:24:03.364221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:59.390 [2024-07-22 16:24:03.621108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:00.890  Copying: 60/60 [kB] (average 58 MBps) 00:47:00.890 00:47:00.890 16:24:05 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:47:00.890 16:24:05 -- dd/basic_rw.sh@37 -- # gen_conf 00:47:00.890 16:24:05 -- dd/common.sh@31 -- # xtrace_disable 00:47:00.890 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:47:00.890 { 00:47:00.890 "subsystems": [ 00:47:00.890 { 00:47:00.890 "subsystem": "bdev", 00:47:00.890 "config": [ 00:47:00.890 { 00:47:00.890 "params": { 00:47:00.890 "trtype": "pcie", 00:47:00.890 "traddr": "0000:00:06.0", 00:47:00.890 "name": "Nvme0" 00:47:00.890 }, 00:47:00.890 "method": "bdev_nvme_attach_controller" 00:47:00.890 }, 00:47:00.890 { 00:47:00.890 "method": "bdev_wait_for_examine" 00:47:00.890 } 00:47:00.890 ] 00:47:00.890 } 00:47:00.890 ] 00:47:00.890 } 00:47:01.148 [2024-07-22 16:24:05.188519] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:01.148 [2024-07-22 16:24:05.188909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90041 ] 00:47:01.148 [2024-07-22 16:24:05.355426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:01.408 [2024-07-22 16:24:05.624707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:03.349  Copying: 60/60 [kB] (average 29 MBps) 00:47:03.349 00:47:03.349 16:24:07 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:03.349 16:24:07 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:47:03.349 16:24:07 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:47:03.349 16:24:07 -- dd/common.sh@11 -- # local nvme_ref= 00:47:03.349 16:24:07 -- dd/common.sh@12 -- # local size=61440 00:47:03.349 16:24:07 -- dd/common.sh@14 -- # local bs=1048576 00:47:03.349 16:24:07 -- dd/common.sh@15 -- # local count=1 00:47:03.349 16:24:07 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:47:03.349 16:24:07 -- dd/common.sh@18 -- # gen_conf 00:47:03.349 16:24:07 -- dd/common.sh@31 -- # xtrace_disable 00:47:03.349 16:24:07 -- common/autotest_common.sh@10 -- # set +x 00:47:03.349 { 00:47:03.349 "subsystems": [ 00:47:03.349 { 00:47:03.349 "subsystem": "bdev", 00:47:03.349 "config": [ 00:47:03.349 { 00:47:03.349 "params": { 00:47:03.349 "trtype": "pcie", 00:47:03.349 "traddr": "0000:00:06.0", 00:47:03.349 "name": "Nvme0" 00:47:03.349 }, 00:47:03.349 "method": "bdev_nvme_attach_controller" 00:47:03.349 }, 00:47:03.349 { 00:47:03.349 "method": "bdev_wait_for_examine" 00:47:03.349 } 00:47:03.349 ] 00:47:03.349 } 00:47:03.349 ] 00:47:03.349 } 00:47:03.349 [2024-07-22 16:24:07.437223] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:47:03.349 [2024-07-22 16:24:07.437634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90071 ] 00:47:03.349 [2024-07-22 16:24:07.616549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:03.914 [2024-07-22 16:24:07.894864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:05.544  Copying: 1024/1024 [kB] (average 1000 MBps) 00:47:05.544 00:47:05.544 16:24:09 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:47:05.544 16:24:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:47:05.544 16:24:09 -- dd/basic_rw.sh@23 -- # count=7 00:47:05.544 16:24:09 -- dd/basic_rw.sh@24 -- # count=7 00:47:05.544 16:24:09 -- dd/basic_rw.sh@25 -- # size=57344 00:47:05.544 16:24:09 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:47:05.544 16:24:09 -- dd/common.sh@98 -- # xtrace_disable 00:47:05.544 16:24:09 -- common/autotest_common.sh@10 -- # set +x 00:47:05.802 16:24:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:47:05.802 16:24:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:47:05.802 16:24:10 -- dd/common.sh@31 -- # xtrace_disable 00:47:05.802 16:24:10 -- common/autotest_common.sh@10 -- # set +x 00:47:05.802 { 00:47:05.802 "subsystems": [ 00:47:05.802 { 00:47:05.802 "subsystem": "bdev", 00:47:05.802 "config": [ 00:47:05.802 { 00:47:05.802 "params": { 00:47:05.802 "trtype": "pcie", 00:47:05.802 "traddr": "0000:00:06.0", 00:47:05.802 "name": "Nvme0" 00:47:05.802 }, 00:47:05.802 "method": "bdev_nvme_attach_controller" 00:47:05.802 }, 00:47:05.802 { 00:47:05.802 "method": "bdev_wait_for_examine" 00:47:05.802 } 00:47:05.802 ] 00:47:05.802 } 00:47:05.802 ] 00:47:05.802 } 00:47:06.060 [2024-07-22 16:24:10.080877] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:47:06.060 [2024-07-22 16:24:10.081269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90097 ] 00:47:06.060 [2024-07-22 16:24:10.261908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:06.318 [2024-07-22 16:24:10.519933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:08.288  Copying: 56/56 [kB] (average 27 MBps) 00:47:08.288 00:47:08.288 16:24:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:47:08.288 16:24:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:47:08.288 16:24:12 -- dd/common.sh@31 -- # xtrace_disable 00:47:08.288 16:24:12 -- common/autotest_common.sh@10 -- # set +x 00:47:08.288 { 00:47:08.288 "subsystems": [ 00:47:08.288 { 00:47:08.288 "subsystem": "bdev", 00:47:08.288 "config": [ 00:47:08.288 { 00:47:08.288 "params": { 00:47:08.288 "trtype": "pcie", 00:47:08.288 "traddr": "0000:00:06.0", 00:47:08.288 "name": "Nvme0" 00:47:08.288 }, 00:47:08.288 "method": "bdev_nvme_attach_controller" 00:47:08.288 }, 00:47:08.288 { 00:47:08.288 "method": "bdev_wait_for_examine" 00:47:08.288 } 00:47:08.288 ] 00:47:08.288 } 00:47:08.288 ] 00:47:08.288 } 00:47:08.288 [2024-07-22 16:24:12.279572] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:08.288 [2024-07-22 16:24:12.279732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90134 ] 00:47:08.288 [2024-07-22 16:24:12.449156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:08.546 [2024-07-22 16:24:12.703879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:10.046  Copying: 56/56 [kB] (average 27 MBps) 00:47:10.046 00:47:10.046 16:24:14 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:10.046 16:24:14 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:47:10.046 16:24:14 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:47:10.046 16:24:14 -- dd/common.sh@11 -- # local nvme_ref= 00:47:10.046 16:24:14 -- dd/common.sh@12 -- # local size=57344 00:47:10.046 16:24:14 -- dd/common.sh@14 -- # local bs=1048576 00:47:10.046 16:24:14 -- dd/common.sh@15 -- # local count=1 00:47:10.046 16:24:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:47:10.046 16:24:14 -- dd/common.sh@18 -- # gen_conf 00:47:10.046 16:24:14 -- dd/common.sh@31 -- # xtrace_disable 00:47:10.046 16:24:14 -- common/autotest_common.sh@10 -- # set +x 00:47:10.046 { 00:47:10.046 "subsystems": [ 00:47:10.046 { 00:47:10.046 "subsystem": "bdev", 00:47:10.046 "config": [ 00:47:10.046 { 00:47:10.046 "params": { 00:47:10.046 "trtype": "pcie", 00:47:10.046 "traddr": "0000:00:06.0", 00:47:10.046 "name": "Nvme0" 00:47:10.046 }, 00:47:10.046 "method": "bdev_nvme_attach_controller" 00:47:10.046 }, 00:47:10.046 { 00:47:10.046 "method": "bdev_wait_for_examine" 00:47:10.046 } 00:47:10.047 ] 00:47:10.047 } 00:47:10.047 ] 00:47:10.047 } 00:47:10.047 [2024-07-22 16:24:14.277329] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:47:10.047 [2024-07-22 16:24:14.277884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90159 ] 00:47:10.318 [2024-07-22 16:24:14.463164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:10.599 [2024-07-22 16:24:14.712814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:12.231  Copying: 1024/1024 [kB] (average 500 MBps) 00:47:12.231 00:47:12.231 16:24:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:47:12.231 16:24:16 -- dd/basic_rw.sh@23 -- # count=7 00:47:12.231 16:24:16 -- dd/basic_rw.sh@24 -- # count=7 00:47:12.231 16:24:16 -- dd/basic_rw.sh@25 -- # size=57344 00:47:12.231 16:24:16 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:47:12.231 16:24:16 -- dd/common.sh@98 -- # xtrace_disable 00:47:12.231 16:24:16 -- common/autotest_common.sh@10 -- # set +x 00:47:12.805 16:24:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:47:12.805 16:24:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:47:12.805 16:24:16 -- dd/common.sh@31 -- # xtrace_disable 00:47:12.805 16:24:16 -- common/autotest_common.sh@10 -- # set +x 00:47:12.805 { 00:47:12.805 "subsystems": [ 00:47:12.805 { 00:47:12.805 "subsystem": "bdev", 00:47:12.805 "config": [ 00:47:12.805 { 00:47:12.805 "params": { 00:47:12.805 "trtype": "pcie", 00:47:12.805 "traddr": "0000:00:06.0", 00:47:12.805 "name": "Nvme0" 00:47:12.805 }, 00:47:12.805 "method": "bdev_nvme_attach_controller" 00:47:12.805 }, 00:47:12.805 { 00:47:12.805 "method": "bdev_wait_for_examine" 00:47:12.805 } 00:47:12.805 ] 00:47:12.805 } 00:47:12.805 ] 00:47:12.805 } 00:47:12.805 [2024-07-22 16:24:17.041712] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
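Every spdk_dd invocation in this suite receives its bdev configuration as JSON over an anonymous file descriptor (the --json /dev/fd/62 argument) rather than from a config file on disk; the JSON blocks interleaved with the trace are that configuration. A stand-in for the gen_conf helper that produces it could look like this (the real helper in dd/common.sh builds the equivalent document from the autotest settings):

# Hypothetical stand-in for gen_conf: attach the PCIe controller at
# 0000:00:06.0 as bdev "Nvme0n1" and wait for examine to finish, exactly the
# configuration shown in the trace.
gen_conf() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}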
00:47:12.805 [2024-07-22 16:24:17.041912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90195 ] 00:47:13.108 [2024-07-22 16:24:17.219363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:13.366 [2024-07-22 16:24:17.502947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:15.007  Copying: 56/56 [kB] (average 54 MBps) 00:47:15.007 00:47:15.007 16:24:18 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:47:15.007 16:24:18 -- dd/basic_rw.sh@37 -- # gen_conf 00:47:15.007 16:24:18 -- dd/common.sh@31 -- # xtrace_disable 00:47:15.007 16:24:18 -- common/autotest_common.sh@10 -- # set +x 00:47:15.007 { 00:47:15.007 "subsystems": [ 00:47:15.007 { 00:47:15.007 "subsystem": "bdev", 00:47:15.007 "config": [ 00:47:15.007 { 00:47:15.007 "params": { 00:47:15.007 "trtype": "pcie", 00:47:15.007 "traddr": "0000:00:06.0", 00:47:15.007 "name": "Nvme0" 00:47:15.007 }, 00:47:15.007 "method": "bdev_nvme_attach_controller" 00:47:15.007 }, 00:47:15.007 { 00:47:15.007 "method": "bdev_wait_for_examine" 00:47:15.007 } 00:47:15.007 ] 00:47:15.007 } 00:47:15.007 ] 00:47:15.007 } 00:47:15.007 [2024-07-22 16:24:19.019133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:15.007 [2024-07-22 16:24:19.019627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90220 ] 00:47:15.007 [2024-07-22 16:24:19.196160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:15.264 [2024-07-22 16:24:19.453617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:17.201  Copying: 56/56 [kB] (average 54 MBps) 00:47:17.201 00:47:17.201 16:24:21 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:17.201 16:24:21 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:47:17.201 16:24:21 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:47:17.201 16:24:21 -- dd/common.sh@11 -- # local nvme_ref= 00:47:17.201 16:24:21 -- dd/common.sh@12 -- # local size=57344 00:47:17.201 16:24:21 -- dd/common.sh@14 -- # local bs=1048576 00:47:17.201 16:24:21 -- dd/common.sh@15 -- # local count=1 00:47:17.201 16:24:21 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:47:17.201 16:24:21 -- dd/common.sh@18 -- # gen_conf 00:47:17.201 16:24:21 -- dd/common.sh@31 -- # xtrace_disable 00:47:17.201 16:24:21 -- common/autotest_common.sh@10 -- # set +x 00:47:17.201 { 00:47:17.201 "subsystems": [ 00:47:17.201 { 00:47:17.201 "subsystem": "bdev", 00:47:17.201 "config": [ 00:47:17.201 { 00:47:17.201 "params": { 00:47:17.201 "trtype": "pcie", 00:47:17.201 "traddr": "0000:00:06.0", 00:47:17.201 "name": "Nvme0" 00:47:17.201 }, 00:47:17.201 "method": "bdev_nvme_attach_controller" 00:47:17.201 }, 00:47:17.201 { 00:47:17.201 "method": "bdev_wait_for_examine" 00:47:17.201 } 00:47:17.201 ] 00:47:17.201 } 00:47:17.201 ] 00:47:17.201 } 00:47:17.201 [2024-07-22 16:24:21.216136] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:47:17.201 [2024-07-22 16:24:21.216293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90251 ] 00:47:17.201 [2024-07-22 16:24:21.385521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:17.460 [2024-07-22 16:24:21.644618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:18.957  Copying: 1024/1024 [kB] (average 1000 MBps) 00:47:18.957 00:47:18.957 16:24:23 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:47:18.957 16:24:23 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:47:18.957 16:24:23 -- dd/basic_rw.sh@23 -- # count=3 00:47:18.957 16:24:23 -- dd/basic_rw.sh@24 -- # count=3 00:47:18.957 16:24:23 -- dd/basic_rw.sh@25 -- # size=49152 00:47:18.957 16:24:23 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:47:18.957 16:24:23 -- dd/common.sh@98 -- # xtrace_disable 00:47:18.957 16:24:23 -- common/autotest_common.sh@10 -- # set +x 00:47:19.522 16:24:23 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:47:19.522 16:24:23 -- dd/basic_rw.sh@30 -- # gen_conf 00:47:19.522 16:24:23 -- dd/common.sh@31 -- # xtrace_disable 00:47:19.522 16:24:23 -- common/autotest_common.sh@10 -- # set +x 00:47:19.522 { 00:47:19.522 "subsystems": [ 00:47:19.522 { 00:47:19.522 "subsystem": "bdev", 00:47:19.522 "config": [ 00:47:19.522 { 00:47:19.522 "params": { 00:47:19.522 "trtype": "pcie", 00:47:19.522 "traddr": "0000:00:06.0", 00:47:19.522 "name": "Nvme0" 00:47:19.522 }, 00:47:19.523 "method": "bdev_nvme_attach_controller" 00:47:19.523 }, 00:47:19.523 { 00:47:19.523 "method": "bdev_wait_for_examine" 00:47:19.523 } 00:47:19.523 ] 00:47:19.523 } 00:47:19.523 ] 00:47:19.523 } 00:47:19.523 [2024-07-22 16:24:23.705954] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
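Each pass starts from a freshly generated dump file whose size matches the pass: 61440, 57344 and 49152 bytes, i.e. 15, 7 and 3 blocks at the three block sizes. The gen_bytes helper itself is not shown in this excerpt; one way to produce an equivalent random dump file would be:

# Hypothetical equivalent of "gen_bytes 49152" (the real helper in
# dd/common.sh may generate the data differently)
head -c 49152 /dev/urandom > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0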
00:47:19.523 [2024-07-22 16:24:23.706340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90287 ] 00:47:19.781 [2024-07-22 16:24:23.873580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:20.039 [2024-07-22 16:24:24.121220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:21.674  Copying: 48/48 [kB] (average 46 MBps) 00:47:21.674 00:47:21.674 16:24:25 -- dd/basic_rw.sh@37 -- # gen_conf 00:47:21.674 16:24:25 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:47:21.674 16:24:25 -- dd/common.sh@31 -- # xtrace_disable 00:47:21.674 16:24:25 -- common/autotest_common.sh@10 -- # set +x 00:47:21.674 { 00:47:21.674 "subsystems": [ 00:47:21.674 { 00:47:21.674 "subsystem": "bdev", 00:47:21.674 "config": [ 00:47:21.674 { 00:47:21.674 "params": { 00:47:21.674 "trtype": "pcie", 00:47:21.674 "traddr": "0000:00:06.0", 00:47:21.674 "name": "Nvme0" 00:47:21.674 }, 00:47:21.674 "method": "bdev_nvme_attach_controller" 00:47:21.674 }, 00:47:21.674 { 00:47:21.674 "method": "bdev_wait_for_examine" 00:47:21.674 } 00:47:21.674 ] 00:47:21.674 } 00:47:21.674 ] 00:47:21.674 } 00:47:21.674 [2024-07-22 16:24:25.889331] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:21.674 [2024-07-22 16:24:25.889507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90316 ] 00:47:21.932 [2024-07-22 16:24:26.068785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:22.190 [2024-07-22 16:24:26.317630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:23.731  Copying: 48/48 [kB] (average 46 MBps) 00:47:23.731 00:47:23.731 16:24:27 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:23.731 16:24:27 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:47:23.731 16:24:27 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:47:23.731 16:24:27 -- dd/common.sh@11 -- # local nvme_ref= 00:47:23.731 16:24:27 -- dd/common.sh@12 -- # local size=49152 00:47:23.731 16:24:27 -- dd/common.sh@14 -- # local bs=1048576 00:47:23.731 16:24:27 -- dd/common.sh@15 -- # local count=1 00:47:23.731 16:24:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:47:23.731 16:24:27 -- dd/common.sh@18 -- # gen_conf 00:47:23.731 16:24:27 -- dd/common.sh@31 -- # xtrace_disable 00:47:23.731 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:47:23.731 { 00:47:23.731 "subsystems": [ 00:47:23.731 { 00:47:23.731 "subsystem": "bdev", 00:47:23.731 "config": [ 00:47:23.731 { 00:47:23.731 "params": { 00:47:23.731 "trtype": "pcie", 00:47:23.731 "traddr": "0000:00:06.0", 00:47:23.731 "name": "Nvme0" 00:47:23.731 }, 00:47:23.731 "method": "bdev_nvme_attach_controller" 00:47:23.731 }, 00:47:23.731 { 00:47:23.731 "method": "bdev_wait_for_examine" 00:47:23.731 } 00:47:23.731 ] 00:47:23.731 } 00:47:23.731 ] 00:47:23.731 } 00:47:23.731 [2024-07-22 16:24:27.961410] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:47:23.731 [2024-07-22 16:24:27.961561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90343 ] 00:47:23.989 [2024-07-22 16:24:28.128892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:24.247 [2024-07-22 16:24:28.386978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:26.185  Copying: 1024/1024 [kB] (average 1000 MBps) 00:47:26.186 00:47:26.186 16:24:30 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:47:26.186 16:24:30 -- dd/basic_rw.sh@23 -- # count=3 00:47:26.186 16:24:30 -- dd/basic_rw.sh@24 -- # count=3 00:47:26.186 16:24:30 -- dd/basic_rw.sh@25 -- # size=49152 00:47:26.186 16:24:30 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:47:26.186 16:24:30 -- dd/common.sh@98 -- # xtrace_disable 00:47:26.186 16:24:30 -- common/autotest_common.sh@10 -- # set +x 00:47:26.443 16:24:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:47:26.443 16:24:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:47:26.443 16:24:30 -- dd/common.sh@31 -- # xtrace_disable 00:47:26.443 16:24:30 -- common/autotest_common.sh@10 -- # set +x 00:47:26.443 { 00:47:26.443 "subsystems": [ 00:47:26.443 { 00:47:26.443 "subsystem": "bdev", 00:47:26.443 "config": [ 00:47:26.443 { 00:47:26.443 "params": { 00:47:26.443 "trtype": "pcie", 00:47:26.443 "traddr": "0000:00:06.0", 00:47:26.443 "name": "Nvme0" 00:47:26.443 }, 00:47:26.443 "method": "bdev_nvme_attach_controller" 00:47:26.443 }, 00:47:26.443 { 00:47:26.443 "method": "bdev_wait_for_examine" 00:47:26.443 } 00:47:26.443 ] 00:47:26.443 } 00:47:26.444 ] 00:47:26.444 } 00:47:26.444 [2024-07-22 16:24:30.670805] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:47:26.444 [2024-07-22 16:24:30.671268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90379 ] 00:47:26.702 [2024-07-22 16:24:30.843351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:26.960 [2024-07-22 16:24:31.148890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:28.462  Copying: 48/48 [kB] (average 46 MBps) 00:47:28.462 00:47:28.462 16:24:32 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:47:28.462 16:24:32 -- dd/basic_rw.sh@37 -- # gen_conf 00:47:28.462 16:24:32 -- dd/common.sh@31 -- # xtrace_disable 00:47:28.462 16:24:32 -- common/autotest_common.sh@10 -- # set +x 00:47:28.462 { 00:47:28.462 "subsystems": [ 00:47:28.462 { 00:47:28.462 "subsystem": "bdev", 00:47:28.462 "config": [ 00:47:28.462 { 00:47:28.462 "params": { 00:47:28.462 "trtype": "pcie", 00:47:28.462 "traddr": "0000:00:06.0", 00:47:28.462 "name": "Nvme0" 00:47:28.462 }, 00:47:28.462 "method": "bdev_nvme_attach_controller" 00:47:28.462 }, 00:47:28.462 { 00:47:28.462 "method": "bdev_wait_for_examine" 00:47:28.462 } 00:47:28.462 ] 00:47:28.462 } 00:47:28.462 ] 00:47:28.462 } 00:47:28.719 [2024-07-22 16:24:32.736936] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:28.719 [2024-07-22 16:24:32.737243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90408 ] 00:47:28.719 [2024-07-22 16:24:32.923040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:28.977 [2024-07-22 16:24:33.213136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:30.947  Copying: 48/48 [kB] (average 46 MBps) 00:47:30.947 00:47:30.947 16:24:34 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:30.947 16:24:34 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:47:30.947 16:24:34 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:47:30.947 16:24:34 -- dd/common.sh@11 -- # local nvme_ref= 00:47:30.947 16:24:34 -- dd/common.sh@12 -- # local size=49152 00:47:30.947 16:24:34 -- dd/common.sh@14 -- # local bs=1048576 00:47:30.947 16:24:34 -- dd/common.sh@15 -- # local count=1 00:47:30.947 16:24:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:47:30.947 16:24:34 -- dd/common.sh@18 -- # gen_conf 00:47:30.947 16:24:34 -- dd/common.sh@31 -- # xtrace_disable 00:47:30.947 16:24:34 -- common/autotest_common.sh@10 -- # set +x 00:47:30.947 { 00:47:30.947 "subsystems": [ 00:47:30.947 { 00:47:30.947 "subsystem": "bdev", 00:47:30.947 "config": [ 00:47:30.947 { 00:47:30.947 "params": { 00:47:30.947 "trtype": "pcie", 00:47:30.947 "traddr": "0000:00:06.0", 00:47:30.947 "name": "Nvme0" 00:47:30.947 }, 00:47:30.947 "method": "bdev_nvme_attach_controller" 00:47:30.947 }, 00:47:30.947 { 00:47:30.947 "method": "bdev_wait_for_examine" 00:47:30.947 } 00:47:30.947 ] 00:47:30.947 } 00:47:30.947 ] 00:47:30.947 } 00:47:30.947 [2024-07-22 16:24:34.999786] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 23.11.0 initialization... 00:47:30.947 [2024-07-22 16:24:35.000018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90435 ] 00:47:30.947 [2024-07-22 16:24:35.180113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:31.205 [2024-07-22 16:24:35.432004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:33.146  Copying: 1024/1024 [kB] (average 500 MBps) 00:47:33.146 00:47:33.146 00:47:33.146 real 0m41.591s 00:47:33.146 user 0m33.717s 00:47:33.146 sys 0m6.193s 00:47:33.146 16:24:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:33.146 ************************************ 00:47:33.146 END TEST dd_rw 00:47:33.146 ************************************ 00:47:33.146 16:24:37 -- common/autotest_common.sh@10 -- # set +x 00:47:33.146 16:24:37 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:47:33.146 16:24:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:33.146 16:24:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:33.146 16:24:37 -- common/autotest_common.sh@10 -- # set +x 00:47:33.146 ************************************ 00:47:33.146 START TEST dd_rw_offset 00:47:33.146 ************************************ 00:47:33.146 16:24:37 -- common/autotest_common.sh@1104 -- # basic_offset 00:47:33.146 16:24:37 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:47:33.146 16:24:37 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:47:33.146 16:24:37 -- dd/common.sh@98 -- # xtrace_disable 00:47:33.146 16:24:37 -- common/autotest_common.sh@10 -- # set +x 00:47:33.146 16:24:37 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:47:33.146 16:24:37 -- dd/basic_rw.sh@56 -- # 
data=jijwwm99z8ld0wj5o420e1se67s9ab4jgprujwqdy08wg5qryphwasx6yzz41m2zi3y1zlnbs2mvkftz8ouxl0cd3256s7iurk9mgjq09yh4rvo048i8cwb9878pa37vg715c2x2uwnpx5nsoksp7aigigytdnh85f04wdg2ikwwop1zoiw7jfwaq50srq0uifikkdlcj1c0f5jmtevqrgw3wjx5j3bfog8d5fx9o1y59dj7817r5adrxbvhqehyvric9i8c2vphhe5nyv4o7u69d7fq6aoupf4ikl4ilf5su4oz43iexwpbs2pdy8pu0db5cyo2rqawbns5v1y2v12ynjwueaj5gyrwu3zas3xhoflr0kb4wy1mepi1t1jz1e7pwivfpvyfz036pgyfq4od5t962poqc0b82lr0zan7sdh87qix07j11lvj6sl6f0pmz3nddgaf1t2zk0pqwl7scoc5alacu7f7zh5y3kobmi594dehem5z7gtmeobnm88gqdcytgqcq59vmbfrm14a3fcm1c8qihu5dph2zp7tpvquf5o6y0v9fy6xgggyrn3vnfkf47t9c3ys5rp3dynqhgq7pm8007tmb84z85tvfur5qfda1v059m8zik4d9j23mbsrb5sl6jbmy05qxpulxx18z7bietspuldc2kqj3mery4ie3xkq8559p9z2h8rqh7y5bu9dg1qfs0ja6nfvz3i0xyw22w3yongn2cbmg2ks7ah6r4msha7fyg6zeek3zdurburw1cdmfqmgl441wayjn54l7665r610whai87dgd17t31q92kuo05n6lrvhivsdjornaov7f1m0fbc3u3kbn6cdv15rmjjbb05fzk72zjlz4c3g7dbi2p5gsx42iwsuqcnz2e6w9xo7f9f6xgk1f5xepmdv3m3ulhdx4zii4nfu1bcldrzdfb4dtfemce9ngxcrj6wpo6obbm4ra8wnk5u3z2uyhuxmef33jqmynthgt7rpbodojr287g1hxan8r58xuvw64fmy9hktfff1bas31ydy8iapmiuv26fadoctcl6dq50qn8gibd9erw7o9e7bgkmik1t3crqk7m3c8dn15ed3d40p1i05ssduaaqd30xrc5e4jzsh9xpe2mkf43bf8ujok37o50bxz8g39d2ihr2z66apnm80ulsw1isxwnaso1royvgo88p0wz5mlngst6sblkjmzptxidufe10cih2raeaxisuprov5vqvegbmz0bgdaynidzsd1dmpd5sc1308o5bdu9j4i5zcvycoi1lvrterydd87q7rh1nbqba27july2px32pxw38qcmw3e0uy7r2x8d629g2cjd2rskww6yypvnyqmas3v9h86pji189iuxajb28zckxz65014rjz6mcwx0ae34rabewgr6d0ri00ngfebmyfb9ewsrkltvbxqskpvq4fii1icm84hn96za32wsxipa3exdjnb6khp21otv6i6dpxfiyh87sbo6semfun11nlbqc9lvu0l29hjfe9mjcfftp0q76lt3obopmva9ylyh3dcjm7tjiwenfeylz0yhribn3lluv0qtzebt5y22xdz7b6lrk9qncqsc4qrc4wyzf0cidooce29w9z5ezlrfalavkrfahgygbhjh40ergifct9y6yx8g0pg4w91nxme5hrvqw9biiox5qj1eidsfvpyb5ehh6kz6ezn8zd0urtus4h9jmhhzv27iwz3maveaadewh7uu5fghinpz3sxp427vs6enrp52lcnmdnp7dg6qjbmna9m27ug7u0nopo9uu7x7rbqb5uyyxlbhby40qkeedo95i02v5dmtt3vubki2yez693qp7nvse7j97c6gfx2ln09jc1t4l3wbt69rjk0os2l65ky2vehhkzyfkyv3ovpp5x7166vkuye4fygdf2li2eczheeq63vhevjh2iygvsz0l8pw84mifdlstk6gw40t3besqzh1fk1mm73kaqtrqbroc4y4ez5l3a6mtoeddonkft3j0hnglh9h73c3x65d3n9pk91rmbet1sdobmwjvqm6dmafnpd3f9tt6701on71gazzly1xe8wje36mrdaxnwufeodq6ae9y7ht6lqzzac86ts7neyk4fy51vg8xp6ctsay4dscmbqrncxa9ajcizj9xsy3mexde2y3qtb3lwkk6c1yeq9jg6cpso530ylti5m0i1svm4urzsli0a18skfbxybucfwe15epzmfqh67x6s8qpg6mkh1o4rguitpmamqu0e49lpn46zibceyfduzon2h7qqmez48bxiattw1l3ijb62znr5u9asvrklsk2ery7qv8rjlz27qg32rpp18uhxpgomyfz59wioyvhk55g3gd9wlafiem19ksic4y06z8yl1jkge5javu5otosxcjy94qhv9hmjb6qkmsyx6t4tuu900g1v5z3eh3i4gv02jb4m2qdodswhq3148p644pnyidwg3j1dnlwqhz20d3jov18xpmdchg28aq5ryp60uhdjlca5k9mtpps7y7hp9ueqbyaef6uvr860pd9ry58pq9r34xcigrq2nw7ii49hlx3y0xt9tu657oy2rr1yfrihxroxzte23jdj3tvlnlm7jtu4fns4zbodxciah4ktmo8tvggwu8a1p0kbvw4kpfp3il80wg7epyaiii3r9yc9df4b1ff1sg9ygkzb6x4og8j9q5z2ss91f7920l4s07xntihpgoz2yizyalleuv85u30oez14zwdtebgrnkm9laz161vzwrn0svh4gzs9smv6ifnbsmn6b3vm8a5bdecy9aw1ewuwok9a1a3p6s37vj365g5rfl4aqth3ccfgvlvpow7tzioyaqhdsev31yv432gz9wwaogrfo1y7orolxqh046rvv9pk4vtq39vqzv08vnfh633tgtno2yh4u9yiiurui3m9sftffq8dp8yiuse4hv0cn3ouj09kua6etx1746n81n27euk6475zxmglwnxp7xes2kxeqa5vz9rggy540egmzs21wdahj988csll7c0scne6ce0m9ipnzbkbyyg83ov143mkhj3crabkh6vsl9t5e7558j12b6e1ntmtk5to40c9g5lpk6sjdqnoz95rw4od3ic8tw7r8w3drp8w2fgl1fl2jdifeedwse8bo35qlnb9rvjt4nxg077qsmrtme3sfkc8f1o7ibj5a8b8q7ketzgjv5r6yje6t2n85f5xlrnlisqvbhp6yk08t87vaowjqwwh0jw60g1v34zkghmyxlhumfxng5l40hn1zemu898d09i5nyqwt6kmyocozb757e1rb6f735vu8da2u4hjv4yq4iwepy123ww0o5n87luf1z7qg3jdtf83bfsinbatl3t8009hbzv4v0tccyv6a8mycrpnpvqffgmf676s3k0f802gszvuv5pm3o4ud0ymcsibcud9hx7jzhnjb5qjgps28l1a0f
hsal1h23jx00ec4crrz6rfsdicdhat3gko4qeehebp0mjqmtw8sunfu7ov5ope14ru9yz3p1dtvoe8lwhpxtk261swgd2bb1q7x98aavvf7wi2sj7hena83f3klsmwcw1srvpsv802myfto1ombqte3ib73zge5u9xzo05djto4z4jozp1p8i62uqm2zqvmxs75m6bo8fpktp2bjjj3s8cno24ye0kxrklacwpqj3abp1qphqxvwc4p5jfokei9e73jwvjxzy7t8lj22d8zzj6gmwlqlar7ii6wlhzv68alxh59tyrttvj99fpqwpe0d6d5fl3ijx3mes9czpml14dj2oc5bpjjvi2gokkfnd01fzlviffbpbkmgu2czmat3dlrprgnn73oijvmfr84d7stdq8o6cap8uo8wdo2bo30fyengzyloxzrffcnn5xtgaj0uyexgkbjeiisukgoysg2wt0ts6ejmiidjp2bw3lwqpw6mimwnjvyk4vcy5lax8w1eqovpjij40hf2115fdjsd6qs79aywe8 00:47:33.146 16:24:37 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:47:33.146 16:24:37 -- dd/basic_rw.sh@59 -- # gen_conf 00:47:33.146 16:24:37 -- dd/common.sh@31 -- # xtrace_disable 00:47:33.146 16:24:37 -- common/autotest_common.sh@10 -- # set +x 00:47:33.146 { 00:47:33.146 "subsystems": [ 00:47:33.146 { 00:47:33.146 "subsystem": "bdev", 00:47:33.146 "config": [ 00:47:33.146 { 00:47:33.146 "params": { 00:47:33.146 "trtype": "pcie", 00:47:33.146 "traddr": "0000:00:06.0", 00:47:33.146 "name": "Nvme0" 00:47:33.146 }, 00:47:33.146 "method": "bdev_nvme_attach_controller" 00:47:33.146 }, 00:47:33.146 { 00:47:33.146 "method": "bdev_wait_for_examine" 00:47:33.146 } 00:47:33.146 ] 00:47:33.146 } 00:47:33.146 ] 00:47:33.146 } 00:47:33.146 [2024-07-22 16:24:37.200737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:33.146 [2024-07-22 16:24:37.200974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90482 ] 00:47:33.146 [2024-07-22 16:24:37.381129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:33.404 [2024-07-22 16:24:37.637164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:35.370  Copying: 4096/4096 [B] (average 4000 kBps) 00:47:35.370 00:47:35.370 16:24:39 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:47:35.370 16:24:39 -- dd/basic_rw.sh@65 -- # gen_conf 00:47:35.370 16:24:39 -- dd/common.sh@31 -- # xtrace_disable 00:47:35.370 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:47:35.370 { 00:47:35.370 "subsystems": [ 00:47:35.370 { 00:47:35.370 "subsystem": "bdev", 00:47:35.370 "config": [ 00:47:35.370 { 00:47:35.370 "params": { 00:47:35.370 "trtype": "pcie", 00:47:35.370 "traddr": "0000:00:06.0", 00:47:35.370 "name": "Nvme0" 00:47:35.370 }, 00:47:35.370 "method": "bdev_nvme_attach_controller" 00:47:35.370 }, 00:47:35.370 { 00:47:35.370 "method": "bdev_wait_for_examine" 00:47:35.370 } 00:47:35.370 ] 00:47:35.370 } 00:47:35.370 ] 00:47:35.370 } 00:47:35.370 [2024-07-22 16:24:39.388269] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
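dd_rw_offset checks that --seek and --skip address the bdev in units of the native block size: 4096 bytes of generated data are written one block into the bdev, read back from the same offset, and compared with the original string (the long escaped pattern match further down is that comparison, expanded by xtrace). Reconstructed from the traced commands, as a sketch:

# dd_rw_offset, reconstructed from the trace; $data holds the 4096-character
# string produced by gen_bytes above, and dd.dump0 contains the same bytes.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)           # write at block offset 1
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(gen_conf) # read the same block back
read -rn4096 data_check < "$DUMP1"
[[ "$data_check" == "$data" ]]                                              # must match what was written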
00:47:35.370 [2024-07-22 16:24:39.388440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90511 ] 00:47:35.370 [2024-07-22 16:24:39.557099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:35.628 [2024-07-22 16:24:39.820730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:37.132  Copying: 4096/4096 [B] (average 4000 kBps) 00:47:37.132 00:47:37.132 16:24:41 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:47:37.133 16:24:41 -- dd/basic_rw.sh@72 -- # [[ jijwwm99z8ld0wj5o420e1se67s9ab4jgprujwqdy08wg5qryphwasx6yzz41m2zi3y1zlnbs2mvkftz8ouxl0cd3256s7iurk9mgjq09yh4rvo048i8cwb9878pa37vg715c2x2uwnpx5nsoksp7aigigytdnh85f04wdg2ikwwop1zoiw7jfwaq50srq0uifikkdlcj1c0f5jmtevqrgw3wjx5j3bfog8d5fx9o1y59dj7817r5adrxbvhqehyvric9i8c2vphhe5nyv4o7u69d7fq6aoupf4ikl4ilf5su4oz43iexwpbs2pdy8pu0db5cyo2rqawbns5v1y2v12ynjwueaj5gyrwu3zas3xhoflr0kb4wy1mepi1t1jz1e7pwivfpvyfz036pgyfq4od5t962poqc0b82lr0zan7sdh87qix07j11lvj6sl6f0pmz3nddgaf1t2zk0pqwl7scoc5alacu7f7zh5y3kobmi594dehem5z7gtmeobnm88gqdcytgqcq59vmbfrm14a3fcm1c8qihu5dph2zp7tpvquf5o6y0v9fy6xgggyrn3vnfkf47t9c3ys5rp3dynqhgq7pm8007tmb84z85tvfur5qfda1v059m8zik4d9j23mbsrb5sl6jbmy05qxpulxx18z7bietspuldc2kqj3mery4ie3xkq8559p9z2h8rqh7y5bu9dg1qfs0ja6nfvz3i0xyw22w3yongn2cbmg2ks7ah6r4msha7fyg6zeek3zdurburw1cdmfqmgl441wayjn54l7665r610whai87dgd17t31q92kuo05n6lrvhivsdjornaov7f1m0fbc3u3kbn6cdv15rmjjbb05fzk72zjlz4c3g7dbi2p5gsx42iwsuqcnz2e6w9xo7f9f6xgk1f5xepmdv3m3ulhdx4zii4nfu1bcldrzdfb4dtfemce9ngxcrj6wpo6obbm4ra8wnk5u3z2uyhuxmef33jqmynthgt7rpbodojr287g1hxan8r58xuvw64fmy9hktfff1bas31ydy8iapmiuv26fadoctcl6dq50qn8gibd9erw7o9e7bgkmik1t3crqk7m3c8dn15ed3d40p1i05ssduaaqd30xrc5e4jzsh9xpe2mkf43bf8ujok37o50bxz8g39d2ihr2z66apnm80ulsw1isxwnaso1royvgo88p0wz5mlngst6sblkjmzptxidufe10cih2raeaxisuprov5vqvegbmz0bgdaynidzsd1dmpd5sc1308o5bdu9j4i5zcvycoi1lvrterydd87q7rh1nbqba27july2px32pxw38qcmw3e0uy7r2x8d629g2cjd2rskww6yypvnyqmas3v9h86pji189iuxajb28zckxz65014rjz6mcwx0ae34rabewgr6d0ri00ngfebmyfb9ewsrkltvbxqskpvq4fii1icm84hn96za32wsxipa3exdjnb6khp21otv6i6dpxfiyh87sbo6semfun11nlbqc9lvu0l29hjfe9mjcfftp0q76lt3obopmva9ylyh3dcjm7tjiwenfeylz0yhribn3lluv0qtzebt5y22xdz7b6lrk9qncqsc4qrc4wyzf0cidooce29w9z5ezlrfalavkrfahgygbhjh40ergifct9y6yx8g0pg4w91nxme5hrvqw9biiox5qj1eidsfvpyb5ehh6kz6ezn8zd0urtus4h9jmhhzv27iwz3maveaadewh7uu5fghinpz3sxp427vs6enrp52lcnmdnp7dg6qjbmna9m27ug7u0nopo9uu7x7rbqb5uyyxlbhby40qkeedo95i02v5dmtt3vubki2yez693qp7nvse7j97c6gfx2ln09jc1t4l3wbt69rjk0os2l65ky2vehhkzyfkyv3ovpp5x7166vkuye4fygdf2li2eczheeq63vhevjh2iygvsz0l8pw84mifdlstk6gw40t3besqzh1fk1mm73kaqtrqbroc4y4ez5l3a6mtoeddonkft3j0hnglh9h73c3x65d3n9pk91rmbet1sdobmwjvqm6dmafnpd3f9tt6701on71gazzly1xe8wje36mrdaxnwufeodq6ae9y7ht6lqzzac86ts7neyk4fy51vg8xp6ctsay4dscmbqrncxa9ajcizj9xsy3mexde2y3qtb3lwkk6c1yeq9jg6cpso530ylti5m0i1svm4urzsli0a18skfbxybucfwe15epzmfqh67x6s8qpg6mkh1o4rguitpmamqu0e49lpn46zibceyfduzon2h7qqmez48bxiattw1l3ijb62znr5u9asvrklsk2ery7qv8rjlz27qg32rpp18uhxpgomyfz59wioyvhk55g3gd9wlafiem19ksic4y06z8yl1jkge5javu5otosxcjy94qhv9hmjb6qkmsyx6t4tuu900g1v5z3eh3i4gv02jb4m2qdodswhq3148p644pnyidwg3j1dnlwqhz20d3jov18xpmdchg28aq5ryp60uhdjlca5k9mtpps7y7hp9ueqbyaef6uvr860pd9ry58pq9r34xcigrq2nw7ii49hlx3y0xt9tu657oy2rr1yfrihxroxzte23jdj3tvlnlm7jtu4fns4zbodxciah4ktmo8tvggwu8a1p0kbvw4kpfp3il80wg7epyaiii3r9yc9df4b1ff1sg9ygkzb6x4og8j9q5z2ss91f7920l4s07xntihpgoz2yizyalleuv85u30oez14zwdtebgrnkm9laz161vzwrn0svh4gzs9smv6ifnbsmn6b3vm8
a5bdecy9aw1ewuwok9a1a3p6s37vj365g5rfl4aqth3ccfgvlvpow7tzioyaqhdsev31yv432gz9wwaogrfo1y7orolxqh046rvv9pk4vtq39vqzv08vnfh633tgtno2yh4u9yiiurui3m9sftffq8dp8yiuse4hv0cn3ouj09kua6etx1746n81n27euk6475zxmglwnxp7xes2kxeqa5vz9rggy540egmzs21wdahj988csll7c0scne6ce0m9ipnzbkbyyg83ov143mkhj3crabkh6vsl9t5e7558j12b6e1ntmtk5to40c9g5lpk6sjdqnoz95rw4od3ic8tw7r8w3drp8w2fgl1fl2jdifeedwse8bo35qlnb9rvjt4nxg077qsmrtme3sfkc8f1o7ibj5a8b8q7ketzgjv5r6yje6t2n85f5xlrnlisqvbhp6yk08t87vaowjqwwh0jw60g1v34zkghmyxlhumfxng5l40hn1zemu898d09i5nyqwt6kmyocozb757e1rb6f735vu8da2u4hjv4yq4iwepy123ww0o5n87luf1z7qg3jdtf83bfsinbatl3t8009hbzv4v0tccyv6a8mycrpnpvqffgmf676s3k0f802gszvuv5pm3o4ud0ymcsibcud9hx7jzhnjb5qjgps28l1a0fhsal1h23jx00ec4crrz6rfsdicdhat3gko4qeehebp0mjqmtw8sunfu7ov5ope14ru9yz3p1dtvoe8lwhpxtk261swgd2bb1q7x98aavvf7wi2sj7hena83f3klsmwcw1srvpsv802myfto1ombqte3ib73zge5u9xzo05djto4z4jozp1p8i62uqm2zqvmxs75m6bo8fpktp2bjjj3s8cno24ye0kxrklacwpqj3abp1qphqxvwc4p5jfokei9e73jwvjxzy7t8lj22d8zzj6gmwlqlar7ii6wlhzv68alxh59tyrttvj99fpqwpe0d6d5fl3ijx3mes9czpml14dj2oc5bpjjvi2gokkfnd01fzlviffbpbkmgu2czmat3dlrprgnn73oijvmfr84d7stdq8o6cap8uo8wdo2bo30fyengzyloxzrffcnn5xtgaj0uyexgkbjeiisukgoysg2wt0ts6ejmiidjp2bw3lwqpw6mimwnjvyk4vcy5lax8w1eqovpjij40hf2115fdjsd6qs79aywe8 == \j\i\j\w\w\m\9\9\z\8\l\d\0\w\j\5\o\4\2\0\e\1\s\e\6\7\s\9\a\b\4\j\g\p\r\u\j\w\q\d\y\0\8\w\g\5\q\r\y\p\h\w\a\s\x\6\y\z\z\4\1\m\2\z\i\3\y\1\z\l\n\b\s\2\m\v\k\f\t\z\8\o\u\x\l\0\c\d\3\2\5\6\s\7\i\u\r\k\9\m\g\j\q\0\9\y\h\4\r\v\o\0\4\8\i\8\c\w\b\9\8\7\8\p\a\3\7\v\g\7\1\5\c\2\x\2\u\w\n\p\x\5\n\s\o\k\s\p\7\a\i\g\i\g\y\t\d\n\h\8\5\f\0\4\w\d\g\2\i\k\w\w\o\p\1\z\o\i\w\7\j\f\w\a\q\5\0\s\r\q\0\u\i\f\i\k\k\d\l\c\j\1\c\0\f\5\j\m\t\e\v\q\r\g\w\3\w\j\x\5\j\3\b\f\o\g\8\d\5\f\x\9\o\1\y\5\9\d\j\7\8\1\7\r\5\a\d\r\x\b\v\h\q\e\h\y\v\r\i\c\9\i\8\c\2\v\p\h\h\e\5\n\y\v\4\o\7\u\6\9\d\7\f\q\6\a\o\u\p\f\4\i\k\l\4\i\l\f\5\s\u\4\o\z\4\3\i\e\x\w\p\b\s\2\p\d\y\8\p\u\0\d\b\5\c\y\o\2\r\q\a\w\b\n\s\5\v\1\y\2\v\1\2\y\n\j\w\u\e\a\j\5\g\y\r\w\u\3\z\a\s\3\x\h\o\f\l\r\0\k\b\4\w\y\1\m\e\p\i\1\t\1\j\z\1\e\7\p\w\i\v\f\p\v\y\f\z\0\3\6\p\g\y\f\q\4\o\d\5\t\9\6\2\p\o\q\c\0\b\8\2\l\r\0\z\a\n\7\s\d\h\8\7\q\i\x\0\7\j\1\1\l\v\j\6\s\l\6\f\0\p\m\z\3\n\d\d\g\a\f\1\t\2\z\k\0\p\q\w\l\7\s\c\o\c\5\a\l\a\c\u\7\f\7\z\h\5\y\3\k\o\b\m\i\5\9\4\d\e\h\e\m\5\z\7\g\t\m\e\o\b\n\m\8\8\g\q\d\c\y\t\g\q\c\q\5\9\v\m\b\f\r\m\1\4\a\3\f\c\m\1\c\8\q\i\h\u\5\d\p\h\2\z\p\7\t\p\v\q\u\f\5\o\6\y\0\v\9\f\y\6\x\g\g\g\y\r\n\3\v\n\f\k\f\4\7\t\9\c\3\y\s\5\r\p\3\d\y\n\q\h\g\q\7\p\m\8\0\0\7\t\m\b\8\4\z\8\5\t\v\f\u\r\5\q\f\d\a\1\v\0\5\9\m\8\z\i\k\4\d\9\j\2\3\m\b\s\r\b\5\s\l\6\j\b\m\y\0\5\q\x\p\u\l\x\x\1\8\z\7\b\i\e\t\s\p\u\l\d\c\2\k\q\j\3\m\e\r\y\4\i\e\3\x\k\q\8\5\5\9\p\9\z\2\h\8\r\q\h\7\y\5\b\u\9\d\g\1\q\f\s\0\j\a\6\n\f\v\z\3\i\0\x\y\w\2\2\w\3\y\o\n\g\n\2\c\b\m\g\2\k\s\7\a\h\6\r\4\m\s\h\a\7\f\y\g\6\z\e\e\k\3\z\d\u\r\b\u\r\w\1\c\d\m\f\q\m\g\l\4\4\1\w\a\y\j\n\5\4\l\7\6\6\5\r\6\1\0\w\h\a\i\8\7\d\g\d\1\7\t\3\1\q\9\2\k\u\o\0\5\n\6\l\r\v\h\i\v\s\d\j\o\r\n\a\o\v\7\f\1\m\0\f\b\c\3\u\3\k\b\n\6\c\d\v\1\5\r\m\j\j\b\b\0\5\f\z\k\7\2\z\j\l\z\4\c\3\g\7\d\b\i\2\p\5\g\s\x\4\2\i\w\s\u\q\c\n\z\2\e\6\w\9\x\o\7\f\9\f\6\x\g\k\1\f\5\x\e\p\m\d\v\3\m\3\u\l\h\d\x\4\z\i\i\4\n\f\u\1\b\c\l\d\r\z\d\f\b\4\d\t\f\e\m\c\e\9\n\g\x\c\r\j\6\w\p\o\6\o\b\b\m\4\r\a\8\w\n\k\5\u\3\z\2\u\y\h\u\x\m\e\f\3\3\j\q\m\y\n\t\h\g\t\7\r\p\b\o\d\o\j\r\2\8\7\g\1\h\x\a\n\8\r\5\8\x\u\v\w\6\4\f\m\y\9\h\k\t\f\f\f\1\b\a\s\3\1\y\d\y\8\i\a\p\m\i\u\v\2\6\f\a\d\o\c\t\c\l\6\d\q\5\0\q\n\8\g\i\b\d\9\e\r\w\7\o\9\e\7\b\g\k\m\i\k\1\t\3\c\r\q\k\7\m\3\c\8\d\n\1\5\e\d\3\d\4\0\p\1\i\0\5\s\s\d\u\a\a\q\d\3\0\x\r\c\5\e\4\j\z\s\h\9\x\p\e\2\m\k\f
\4\3\b\f\8\u\j\o\k\3\7\o\5\0\b\x\z\8\g\3\9\d\2\i\h\r\2\z\6\6\a\p\n\m\8\0\u\l\s\w\1\i\s\x\w\n\a\s\o\1\r\o\y\v\g\o\8\8\p\0\w\z\5\m\l\n\g\s\t\6\s\b\l\k\j\m\z\p\t\x\i\d\u\f\e\1\0\c\i\h\2\r\a\e\a\x\i\s\u\p\r\o\v\5\v\q\v\e\g\b\m\z\0\b\g\d\a\y\n\i\d\z\s\d\1\d\m\p\d\5\s\c\1\3\0\8\o\5\b\d\u\9\j\4\i\5\z\c\v\y\c\o\i\1\l\v\r\t\e\r\y\d\d\8\7\q\7\r\h\1\n\b\q\b\a\2\7\j\u\l\y\2\p\x\3\2\p\x\w\3\8\q\c\m\w\3\e\0\u\y\7\r\2\x\8\d\6\2\9\g\2\c\j\d\2\r\s\k\w\w\6\y\y\p\v\n\y\q\m\a\s\3\v\9\h\8\6\p\j\i\1\8\9\i\u\x\a\j\b\2\8\z\c\k\x\z\6\5\0\1\4\r\j\z\6\m\c\w\x\0\a\e\3\4\r\a\b\e\w\g\r\6\d\0\r\i\0\0\n\g\f\e\b\m\y\f\b\9\e\w\s\r\k\l\t\v\b\x\q\s\k\p\v\q\4\f\i\i\1\i\c\m\8\4\h\n\9\6\z\a\3\2\w\s\x\i\p\a\3\e\x\d\j\n\b\6\k\h\p\2\1\o\t\v\6\i\6\d\p\x\f\i\y\h\8\7\s\b\o\6\s\e\m\f\u\n\1\1\n\l\b\q\c\9\l\v\u\0\l\2\9\h\j\f\e\9\m\j\c\f\f\t\p\0\q\7\6\l\t\3\o\b\o\p\m\v\a\9\y\l\y\h\3\d\c\j\m\7\t\j\i\w\e\n\f\e\y\l\z\0\y\h\r\i\b\n\3\l\l\u\v\0\q\t\z\e\b\t\5\y\2\2\x\d\z\7\b\6\l\r\k\9\q\n\c\q\s\c\4\q\r\c\4\w\y\z\f\0\c\i\d\o\o\c\e\2\9\w\9\z\5\e\z\l\r\f\a\l\a\v\k\r\f\a\h\g\y\g\b\h\j\h\4\0\e\r\g\i\f\c\t\9\y\6\y\x\8\g\0\p\g\4\w\9\1\n\x\m\e\5\h\r\v\q\w\9\b\i\i\o\x\5\q\j\1\e\i\d\s\f\v\p\y\b\5\e\h\h\6\k\z\6\e\z\n\8\z\d\0\u\r\t\u\s\4\h\9\j\m\h\h\z\v\2\7\i\w\z\3\m\a\v\e\a\a\d\e\w\h\7\u\u\5\f\g\h\i\n\p\z\3\s\x\p\4\2\7\v\s\6\e\n\r\p\5\2\l\c\n\m\d\n\p\7\d\g\6\q\j\b\m\n\a\9\m\2\7\u\g\7\u\0\n\o\p\o\9\u\u\7\x\7\r\b\q\b\5\u\y\y\x\l\b\h\b\y\4\0\q\k\e\e\d\o\9\5\i\0\2\v\5\d\m\t\t\3\v\u\b\k\i\2\y\e\z\6\9\3\q\p\7\n\v\s\e\7\j\9\7\c\6\g\f\x\2\l\n\0\9\j\c\1\t\4\l\3\w\b\t\6\9\r\j\k\0\o\s\2\l\6\5\k\y\2\v\e\h\h\k\z\y\f\k\y\v\3\o\v\p\p\5\x\7\1\6\6\v\k\u\y\e\4\f\y\g\d\f\2\l\i\2\e\c\z\h\e\e\q\6\3\v\h\e\v\j\h\2\i\y\g\v\s\z\0\l\8\p\w\8\4\m\i\f\d\l\s\t\k\6\g\w\4\0\t\3\b\e\s\q\z\h\1\f\k\1\m\m\7\3\k\a\q\t\r\q\b\r\o\c\4\y\4\e\z\5\l\3\a\6\m\t\o\e\d\d\o\n\k\f\t\3\j\0\h\n\g\l\h\9\h\7\3\c\3\x\6\5\d\3\n\9\p\k\9\1\r\m\b\e\t\1\s\d\o\b\m\w\j\v\q\m\6\d\m\a\f\n\p\d\3\f\9\t\t\6\7\0\1\o\n\7\1\g\a\z\z\l\y\1\x\e\8\w\j\e\3\6\m\r\d\a\x\n\w\u\f\e\o\d\q\6\a\e\9\y\7\h\t\6\l\q\z\z\a\c\8\6\t\s\7\n\e\y\k\4\f\y\5\1\v\g\8\x\p\6\c\t\s\a\y\4\d\s\c\m\b\q\r\n\c\x\a\9\a\j\c\i\z\j\9\x\s\y\3\m\e\x\d\e\2\y\3\q\t\b\3\l\w\k\k\6\c\1\y\e\q\9\j\g\6\c\p\s\o\5\3\0\y\l\t\i\5\m\0\i\1\s\v\m\4\u\r\z\s\l\i\0\a\1\8\s\k\f\b\x\y\b\u\c\f\w\e\1\5\e\p\z\m\f\q\h\6\7\x\6\s\8\q\p\g\6\m\k\h\1\o\4\r\g\u\i\t\p\m\a\m\q\u\0\e\4\9\l\p\n\4\6\z\i\b\c\e\y\f\d\u\z\o\n\2\h\7\q\q\m\e\z\4\8\b\x\i\a\t\t\w\1\l\3\i\j\b\6\2\z\n\r\5\u\9\a\s\v\r\k\l\s\k\2\e\r\y\7\q\v\8\r\j\l\z\2\7\q\g\3\2\r\p\p\1\8\u\h\x\p\g\o\m\y\f\z\5\9\w\i\o\y\v\h\k\5\5\g\3\g\d\9\w\l\a\f\i\e\m\1\9\k\s\i\c\4\y\0\6\z\8\y\l\1\j\k\g\e\5\j\a\v\u\5\o\t\o\s\x\c\j\y\9\4\q\h\v\9\h\m\j\b\6\q\k\m\s\y\x\6\t\4\t\u\u\9\0\0\g\1\v\5\z\3\e\h\3\i\4\g\v\0\2\j\b\4\m\2\q\d\o\d\s\w\h\q\3\1\4\8\p\6\4\4\p\n\y\i\d\w\g\3\j\1\d\n\l\w\q\h\z\2\0\d\3\j\o\v\1\8\x\p\m\d\c\h\g\2\8\a\q\5\r\y\p\6\0\u\h\d\j\l\c\a\5\k\9\m\t\p\p\s\7\y\7\h\p\9\u\e\q\b\y\a\e\f\6\u\v\r\8\6\0\p\d\9\r\y\5\8\p\q\9\r\3\4\x\c\i\g\r\q\2\n\w\7\i\i\4\9\h\l\x\3\y\0\x\t\9\t\u\6\5\7\o\y\2\r\r\1\y\f\r\i\h\x\r\o\x\z\t\e\2\3\j\d\j\3\t\v\l\n\l\m\7\j\t\u\4\f\n\s\4\z\b\o\d\x\c\i\a\h\4\k\t\m\o\8\t\v\g\g\w\u\8\a\1\p\0\k\b\v\w\4\k\p\f\p\3\i\l\8\0\w\g\7\e\p\y\a\i\i\i\3\r\9\y\c\9\d\f\4\b\1\f\f\1\s\g\9\y\g\k\z\b\6\x\4\o\g\8\j\9\q\5\z\2\s\s\9\1\f\7\9\2\0\l\4\s\0\7\x\n\t\i\h\p\g\o\z\2\y\i\z\y\a\l\l\e\u\v\8\5\u\3\0\o\e\z\1\4\z\w\d\t\e\b\g\r\n\k\m\9\l\a\z\1\6\1\v\z\w\r\n\0\s\v\h\4\g\z\s\9\s\m\v\6\i\f\n\b\s\m\n\6\b\3\v\m\8\a\5\b\d\e\c\y\9\a\w\1\e\w\u\w\o\k\9\a\1\a\3\p\6\s\3\7\v\j\3\6\5\g\5\r\f\l\4\a\q\t\h\3\c\c\f\g\v\l\v\p\o\w\7\t\z\i\o\y\a\q\h\d\s\e\v\3\1\y\v\4\3\
2\g\z\9\w\w\a\o\g\r\f\o\1\y\7\o\r\o\l\x\q\h\0\4\6\r\v\v\9\p\k\4\v\t\q\3\9\v\q\z\v\0\8\v\n\f\h\6\3\3\t\g\t\n\o\2\y\h\4\u\9\y\i\i\u\r\u\i\3\m\9\s\f\t\f\f\q\8\d\p\8\y\i\u\s\e\4\h\v\0\c\n\3\o\u\j\0\9\k\u\a\6\e\t\x\1\7\4\6\n\8\1\n\2\7\e\u\k\6\4\7\5\z\x\m\g\l\w\n\x\p\7\x\e\s\2\k\x\e\q\a\5\v\z\9\r\g\g\y\5\4\0\e\g\m\z\s\2\1\w\d\a\h\j\9\8\8\c\s\l\l\7\c\0\s\c\n\e\6\c\e\0\m\9\i\p\n\z\b\k\b\y\y\g\8\3\o\v\1\4\3\m\k\h\j\3\c\r\a\b\k\h\6\v\s\l\9\t\5\e\7\5\5\8\j\1\2\b\6\e\1\n\t\m\t\k\5\t\o\4\0\c\9\g\5\l\p\k\6\s\j\d\q\n\o\z\9\5\r\w\4\o\d\3\i\c\8\t\w\7\r\8\w\3\d\r\p\8\w\2\f\g\l\1\f\l\2\j\d\i\f\e\e\d\w\s\e\8\b\o\3\5\q\l\n\b\9\r\v\j\t\4\n\x\g\0\7\7\q\s\m\r\t\m\e\3\s\f\k\c\8\f\1\o\7\i\b\j\5\a\8\b\8\q\7\k\e\t\z\g\j\v\5\r\6\y\j\e\6\t\2\n\8\5\f\5\x\l\r\n\l\i\s\q\v\b\h\p\6\y\k\0\8\t\8\7\v\a\o\w\j\q\w\w\h\0\j\w\6\0\g\1\v\3\4\z\k\g\h\m\y\x\l\h\u\m\f\x\n\g\5\l\4\0\h\n\1\z\e\m\u\8\9\8\d\0\9\i\5\n\y\q\w\t\6\k\m\y\o\c\o\z\b\7\5\7\e\1\r\b\6\f\7\3\5\v\u\8\d\a\2\u\4\h\j\v\4\y\q\4\i\w\e\p\y\1\2\3\w\w\0\o\5\n\8\7\l\u\f\1\z\7\q\g\3\j\d\t\f\8\3\b\f\s\i\n\b\a\t\l\3\t\8\0\0\9\h\b\z\v\4\v\0\t\c\c\y\v\6\a\8\m\y\c\r\p\n\p\v\q\f\f\g\m\f\6\7\6\s\3\k\0\f\8\0\2\g\s\z\v\u\v\5\p\m\3\o\4\u\d\0\y\m\c\s\i\b\c\u\d\9\h\x\7\j\z\h\n\j\b\5\q\j\g\p\s\2\8\l\1\a\0\f\h\s\a\l\1\h\2\3\j\x\0\0\e\c\4\c\r\r\z\6\r\f\s\d\i\c\d\h\a\t\3\g\k\o\4\q\e\e\h\e\b\p\0\m\j\q\m\t\w\8\s\u\n\f\u\7\o\v\5\o\p\e\1\4\r\u\9\y\z\3\p\1\d\t\v\o\e\8\l\w\h\p\x\t\k\2\6\1\s\w\g\d\2\b\b\1\q\7\x\9\8\a\a\v\v\f\7\w\i\2\s\j\7\h\e\n\a\8\3\f\3\k\l\s\m\w\c\w\1\s\r\v\p\s\v\8\0\2\m\y\f\t\o\1\o\m\b\q\t\e\3\i\b\7\3\z\g\e\5\u\9\x\z\o\0\5\d\j\t\o\4\z\4\j\o\z\p\1\p\8\i\6\2\u\q\m\2\z\q\v\m\x\s\7\5\m\6\b\o\8\f\p\k\t\p\2\b\j\j\j\3\s\8\c\n\o\2\4\y\e\0\k\x\r\k\l\a\c\w\p\q\j\3\a\b\p\1\q\p\h\q\x\v\w\c\4\p\5\j\f\o\k\e\i\9\e\7\3\j\w\v\j\x\z\y\7\t\8\l\j\2\2\d\8\z\z\j\6\g\m\w\l\q\l\a\r\7\i\i\6\w\l\h\z\v\6\8\a\l\x\h\5\9\t\y\r\t\t\v\j\9\9\f\p\q\w\p\e\0\d\6\d\5\f\l\3\i\j\x\3\m\e\s\9\c\z\p\m\l\1\4\d\j\2\o\c\5\b\p\j\j\v\i\2\g\o\k\k\f\n\d\0\1\f\z\l\v\i\f\f\b\p\b\k\m\g\u\2\c\z\m\a\t\3\d\l\r\p\r\g\n\n\7\3\o\i\j\v\m\f\r\8\4\d\7\s\t\d\q\8\o\6\c\a\p\8\u\o\8\w\d\o\2\b\o\3\0\f\y\e\n\g\z\y\l\o\x\z\r\f\f\c\n\n\5\x\t\g\a\j\0\u\y\e\x\g\k\b\j\e\i\i\s\u\k\g\o\y\s\g\2\w\t\0\t\s\6\e\j\m\i\i\d\j\p\2\b\w\3\l\w\q\p\w\6\m\i\m\w\n\j\v\y\k\4\v\c\y\5\l\a\x\8\w\1\e\q\o\v\p\j\i\j\4\0\h\f\2\1\1\5\f\d\j\s\d\6\q\s\7\9\a\y\w\e\8 ]] 00:47:37.133 ************************************ 00:47:37.133 END TEST dd_rw_offset 00:47:37.133 ************************************ 00:47:37.133 00:47:37.133 real 0m4.311s 00:47:37.133 user 0m3.461s 00:47:37.133 sys 0m0.671s 00:47:37.133 16:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:37.133 16:24:41 -- common/autotest_common.sh@10 -- # set +x 00:47:37.398 16:24:41 -- dd/basic_rw.sh@1 -- # cleanup 00:47:37.398 16:24:41 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:47:37.398 16:24:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:47:37.398 16:24:41 -- dd/common.sh@11 -- # local nvme_ref= 00:47:37.398 16:24:41 -- dd/common.sh@12 -- # local size=0xffff 00:47:37.398 16:24:41 -- dd/common.sh@14 -- # local bs=1048576 00:47:37.398 16:24:41 -- dd/common.sh@15 -- # local count=1 00:47:37.398 16:24:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:47:37.398 16:24:41 -- dd/common.sh@18 -- # gen_conf 00:47:37.398 16:24:41 -- dd/common.sh@31 -- # xtrace_disable 00:47:37.398 16:24:41 -- common/autotest_common.sh@10 -- # set +x 00:47:37.398 { 00:47:37.398 "subsystems": [ 00:47:37.398 { 00:47:37.398 
"subsystem": "bdev", 00:47:37.398 "config": [ 00:47:37.398 { 00:47:37.398 "params": { 00:47:37.398 "trtype": "pcie", 00:47:37.398 "traddr": "0000:00:06.0", 00:47:37.398 "name": "Nvme0" 00:47:37.398 }, 00:47:37.398 "method": "bdev_nvme_attach_controller" 00:47:37.398 }, 00:47:37.398 { 00:47:37.398 "method": "bdev_wait_for_examine" 00:47:37.398 } 00:47:37.398 ] 00:47:37.398 } 00:47:37.398 ] 00:47:37.398 } 00:47:37.398 [2024-07-22 16:24:41.506142] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:37.398 [2024-07-22 16:24:41.506344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90552 ] 00:47:37.657 [2024-07-22 16:24:41.679135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:37.915 [2024-07-22 16:24:41.981852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:39.561  Copying: 1024/1024 [kB] (average 1000 MBps) 00:47:39.561 00:47:39.561 16:24:43 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:39.561 ************************************ 00:47:39.561 END TEST spdk_dd_basic_rw 00:47:39.561 ************************************ 00:47:39.561 00:47:39.561 real 0m50.649s 00:47:39.561 user 0m40.755s 00:47:39.561 sys 0m7.767s 00:47:39.561 16:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:39.561 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:47:39.561 16:24:43 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:47:39.561 16:24:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:39.561 16:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:39.561 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:47:39.561 ************************************ 00:47:39.561 START TEST spdk_dd_posix 00:47:39.561 ************************************ 00:47:39.561 16:24:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:47:39.561 * Looking for test storage... 
00:47:39.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:47:39.561 16:24:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:39.561 16:24:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:39.561 16:24:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:39.561 16:24:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:39.561 16:24:43 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:39.561 16:24:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:39.561 16:24:43 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:39.561 16:24:43 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:39.561 16:24:43 -- paths/export.sh@6 -- # export PATH 00:47:39.561 16:24:43 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:47:39.561 16:24:43 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:47:39.561 16:24:43 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:47:39.561 16:24:43 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:47:39.561 16:24:43 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:47:39.561 16:24:43 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:39.561 16:24:43 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:39.561 16:24:43 -- dd/posix.sh@130 -- # tests 00:47:39.561 16:24:43 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:47:39.561 * First test run, liburing in use 00:47:39.561 16:24:43 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:47:39.561 16:24:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:39.561 16:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:39.561 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:47:39.561 ************************************ 00:47:39.561 START TEST dd_flag_append 00:47:39.561 ************************************ 00:47:39.561 16:24:43 -- common/autotest_common.sh@1104 -- # append 00:47:39.561 16:24:43 -- dd/posix.sh@16 -- # local dump0 00:47:39.561 16:24:43 -- dd/posix.sh@17 -- # local dump1 00:47:39.561 16:24:43 -- dd/posix.sh@19 -- # gen_bytes 32 00:47:39.561 16:24:43 -- dd/common.sh@98 -- # xtrace_disable 00:47:39.561 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:47:39.561 16:24:43 -- dd/posix.sh@19 -- # dump0=90qgcho6qo652qao5utt0s6sgohyav9m 00:47:39.561 16:24:43 -- dd/posix.sh@20 -- # gen_bytes 32 00:47:39.561 16:24:43 -- dd/common.sh@98 -- # xtrace_disable 00:47:39.561 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:47:39.561 16:24:43 -- dd/posix.sh@20 -- # dump1=otfbp4dzbx8ip2nm8y0809pf9xwzcmdk 00:47:39.561 16:24:43 -- dd/posix.sh@22 -- # printf %s 90qgcho6qo652qao5utt0s6sgohyav9m 00:47:39.561 16:24:43 -- dd/posix.sh@23 -- # printf %s otfbp4dzbx8ip2nm8y0809pf9xwzcmdk 00:47:39.561 16:24:43 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:47:39.819 [2024-07-22 16:24:43.887763] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:47:39.819 [2024-07-22 16:24:43.887908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90632 ] 00:47:39.819 [2024-07-22 16:24:44.053061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:40.078 [2024-07-22 16:24:44.311834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:42.023  Copying: 32/32 [B] (average 31 kBps) 00:47:42.023 00:47:42.023 16:24:45 -- dd/posix.sh@27 -- # [[ otfbp4dzbx8ip2nm8y0809pf9xwzcmdk90qgcho6qo652qao5utt0s6sgohyav9m == \o\t\f\b\p\4\d\z\b\x\8\i\p\2\n\m\8\y\0\8\0\9\p\f\9\x\w\z\c\m\d\k\9\0\q\g\c\h\o\6\q\o\6\5\2\q\a\o\5\u\t\t\0\s\6\s\g\o\h\y\a\v\9\m ]] 00:47:42.023 00:47:42.023 real 0m2.076s 00:47:42.023 ************************************ 00:47:42.023 END TEST dd_flag_append 00:47:42.023 ************************************ 00:47:42.023 user 0m1.683s 00:47:42.023 sys 0m0.280s 00:47:42.023 16:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:42.023 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:47:42.023 16:24:45 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:47:42.023 16:24:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:42.023 16:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:42.023 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:47:42.023 ************************************ 00:47:42.023 START TEST dd_flag_directory 00:47:42.023 ************************************ 00:47:42.023 16:24:45 -- common/autotest_common.sh@1104 -- # directory 00:47:42.023 16:24:45 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:42.023 16:24:45 -- common/autotest_common.sh@640 -- # local es=0 00:47:42.023 16:24:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:42.023 16:24:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:42.023 16:24:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:42.023 16:24:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:42.023 16:24:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:42.023 16:24:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:42.023 16:24:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:42.023 16:24:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:42.023 16:24:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:42.023 16:24:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:42.023 [2024-07-22 16:24:46.017383] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:47:42.023 [2024-07-22 16:24:46.017617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90675 ] 00:47:42.023 [2024-07-22 16:24:46.198246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:42.281 [2024-07-22 16:24:46.453580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:42.538 [2024-07-22 16:24:46.787090] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:47:42.538 [2024-07-22 16:24:46.787173] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:47:42.538 [2024-07-22 16:24:46.787198] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:43.472 [2024-07-22 16:24:47.540658] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:47:43.730 16:24:47 -- common/autotest_common.sh@643 -- # es=236 00:47:43.730 16:24:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:47:43.730 16:24:47 -- common/autotest_common.sh@652 -- # es=108 00:47:43.730 16:24:47 -- common/autotest_common.sh@653 -- # case "$es" in 00:47:43.730 16:24:47 -- common/autotest_common.sh@660 -- # es=1 00:47:43.730 16:24:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:47:43.730 16:24:47 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:47:43.730 16:24:47 -- common/autotest_common.sh@640 -- # local es=0 00:47:43.730 16:24:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:47:43.730 16:24:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:43.730 16:24:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:43.730 16:24:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:43.730 16:24:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:43.730 16:24:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:43.730 16:24:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:43.730 16:24:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:43.730 16:24:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:43.730 16:24:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:47:44.002 [2024-07-22 16:24:48.050461] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:47:44.002 [2024-07-22 16:24:48.050632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90698 ] 00:47:44.003 [2024-07-22 16:24:48.219944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:44.261 [2024-07-22 16:24:48.468380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:44.825 [2024-07-22 16:24:48.793602] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:47:44.825 [2024-07-22 16:24:48.793689] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:47:44.825 [2024-07-22 16:24:48.793718] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:45.390 [2024-07-22 16:24:49.556046] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:47:46.003 16:24:50 -- common/autotest_common.sh@643 -- # es=236 00:47:46.003 16:24:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:47:46.003 16:24:50 -- common/autotest_common.sh@652 -- # es=108 00:47:46.003 16:24:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:47:46.003 16:24:50 -- common/autotest_common.sh@660 -- # es=1 00:47:46.003 16:24:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:47:46.003 00:47:46.003 real 0m4.062s 00:47:46.003 user 0m3.255s 00:47:46.003 sys 0m0.606s 00:47:46.003 16:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:46.003 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:47:46.003 ************************************ 00:47:46.003 END TEST dd_flag_directory 00:47:46.003 ************************************ 00:47:46.003 16:24:50 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:47:46.003 16:24:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:46.003 16:24:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:46.003 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:47:46.003 ************************************ 00:47:46.003 START TEST dd_flag_nofollow 00:47:46.003 ************************************ 00:47:46.003 16:24:50 -- common/autotest_common.sh@1104 -- # nofollow 00:47:46.003 16:24:50 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:47:46.003 16:24:50 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:47:46.003 16:24:50 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:47:46.003 16:24:50 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:47:46.003 16:24:50 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:46.003 16:24:50 -- common/autotest_common.sh@640 -- # local es=0 00:47:46.003 16:24:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:46.003 16:24:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:46.003 16:24:50 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:46.004 16:24:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:46.004 16:24:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:46.004 16:24:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:46.004 16:24:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:46.004 16:24:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:46.004 16:24:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:46.004 16:24:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:46.004 [2024-07-22 16:24:50.131253] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:46.004 [2024-07-22 16:24:50.131419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90744 ] 00:47:46.261 [2024-07-22 16:24:50.300237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:46.529 [2024-07-22 16:24:50.548018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:46.788 [2024-07-22 16:24:50.864979] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:47:46.788 [2024-07-22 16:24:50.865060] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:47:46.788 [2024-07-22 16:24:50.865084] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:47.352 [2024-07-22 16:24:51.613301] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:47:47.916 16:24:52 -- common/autotest_common.sh@643 -- # es=216 00:47:47.917 16:24:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:47:47.917 16:24:52 -- common/autotest_common.sh@652 -- # es=88 00:47:47.917 16:24:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:47:47.917 16:24:52 -- common/autotest_common.sh@660 -- # es=1 00:47:47.917 16:24:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:47:47.917 16:24:52 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:47:47.917 16:24:52 -- common/autotest_common.sh@640 -- # local es=0 00:47:47.917 16:24:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:47:47.917 16:24:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:47.917 16:24:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:47.917 16:24:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:47.917 16:24:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:47.917 16:24:52 -- common/autotest_common.sh@634 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:47.917 16:24:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:47:47.917 16:24:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:47:47.917 16:24:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:47:47.917 16:24:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:47:47.917 [2024-07-22 16:24:52.117403] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:47.917 [2024-07-22 16:24:52.117571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90766 ] 00:47:48.174 [2024-07-22 16:24:52.283817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:48.432 [2024-07-22 16:24:52.533012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:48.690 [2024-07-22 16:24:52.868117] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:47:48.690 [2024-07-22 16:24:52.868192] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:47:48.690 [2024-07-22 16:24:52.868217] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:47:49.622 [2024-07-22 16:24:53.655868] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:47:49.882 16:24:54 -- common/autotest_common.sh@643 -- # es=216 00:47:49.882 16:24:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:47:49.882 16:24:54 -- common/autotest_common.sh@652 -- # es=88 00:47:49.882 16:24:54 -- common/autotest_common.sh@653 -- # case "$es" in 00:47:49.882 16:24:54 -- common/autotest_common.sh@660 -- # es=1 00:47:49.882 16:24:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:47:49.882 16:24:54 -- dd/posix.sh@46 -- # gen_bytes 512 00:47:49.882 16:24:54 -- dd/common.sh@98 -- # xtrace_disable 00:47:49.882 16:24:54 -- common/autotest_common.sh@10 -- # set +x 00:47:49.882 16:24:54 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:50.140 [2024-07-22 16:24:54.188208] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:47:50.140 [2024-07-22 16:24:54.188453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90791 ] 00:47:50.140 [2024-07-22 16:24:54.364470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:50.398 [2024-07-22 16:24:54.606920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:52.338  Copying: 512/512 [B] (average 500 kBps) 00:47:52.338 00:47:52.338 16:24:56 -- dd/posix.sh@49 -- # [[ dbughsql5sy22lu71sgekufx8odi24ez7xlsw8ij8du7etgno8kaqdh1j8p0jnqzywozmrsokr5cnu8lvb99uku1mp14ata777q3x7tc1md0nrbhlewmgy57743n99uqhod90zmhc34gj4ca2ef0lnjj7zk35nxsc49nllmpwp1cl4mjke8mujkuu9yef68vfcv1wdw79ykfw8d3sabtgybapkwa3fhm1dxkj20jj4esvwn3cnebzqkcuve03wmm57m9lja009ttbqjijmucbionnvmqw1v8jsanugsdf8vvxlf7akj2q5uthb3lucc5qtjmsj18kfwbzmp9yz42m49wbo58mov9kdgurgpke6xxfxym8fdqc1xhnsba0ovy9viwng744zdwwrbfd3wahw0kicm87jgcg6k3s8z4y57ygtuuw7p1bialyccngxloq1g5q64qeaqto0fw12p1gwm7n7cxic033p2veb8zn5kxyp4vexox4t8w5cgtr4w6 == \d\b\u\g\h\s\q\l\5\s\y\2\2\l\u\7\1\s\g\e\k\u\f\x\8\o\d\i\2\4\e\z\7\x\l\s\w\8\i\j\8\d\u\7\e\t\g\n\o\8\k\a\q\d\h\1\j\8\p\0\j\n\q\z\y\w\o\z\m\r\s\o\k\r\5\c\n\u\8\l\v\b\9\9\u\k\u\1\m\p\1\4\a\t\a\7\7\7\q\3\x\7\t\c\1\m\d\0\n\r\b\h\l\e\w\m\g\y\5\7\7\4\3\n\9\9\u\q\h\o\d\9\0\z\m\h\c\3\4\g\j\4\c\a\2\e\f\0\l\n\j\j\7\z\k\3\5\n\x\s\c\4\9\n\l\l\m\p\w\p\1\c\l\4\m\j\k\e\8\m\u\j\k\u\u\9\y\e\f\6\8\v\f\c\v\1\w\d\w\7\9\y\k\f\w\8\d\3\s\a\b\t\g\y\b\a\p\k\w\a\3\f\h\m\1\d\x\k\j\2\0\j\j\4\e\s\v\w\n\3\c\n\e\b\z\q\k\c\u\v\e\0\3\w\m\m\5\7\m\9\l\j\a\0\0\9\t\t\b\q\j\i\j\m\u\c\b\i\o\n\n\v\m\q\w\1\v\8\j\s\a\n\u\g\s\d\f\8\v\v\x\l\f\7\a\k\j\2\q\5\u\t\h\b\3\l\u\c\c\5\q\t\j\m\s\j\1\8\k\f\w\b\z\m\p\9\y\z\4\2\m\4\9\w\b\o\5\8\m\o\v\9\k\d\g\u\r\g\p\k\e\6\x\x\f\x\y\m\8\f\d\q\c\1\x\h\n\s\b\a\0\o\v\y\9\v\i\w\n\g\7\4\4\z\d\w\w\r\b\f\d\3\w\a\h\w\0\k\i\c\m\8\7\j\g\c\g\6\k\3\s\8\z\4\y\5\7\y\g\t\u\u\w\7\p\1\b\i\a\l\y\c\c\n\g\x\l\o\q\1\g\5\q\6\4\q\e\a\q\t\o\0\f\w\1\2\p\1\g\w\m\7\n\7\c\x\i\c\0\3\3\p\2\v\e\b\8\z\n\5\k\x\y\p\4\v\e\x\o\x\4\t\8\w\5\c\g\t\r\4\w\6 ]] 00:47:52.338 00:47:52.338 real 0m6.175s 00:47:52.338 user 0m4.906s 00:47:52.338 sys 0m0.957s 00:47:52.338 16:24:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:52.338 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:47:52.338 ************************************ 00:47:52.338 END TEST dd_flag_nofollow 00:47:52.338 ************************************ 00:47:52.338 16:24:56 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:47:52.338 16:24:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:52.338 16:24:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:52.338 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:47:52.338 ************************************ 00:47:52.338 START TEST dd_flag_noatime 00:47:52.338 ************************************ 00:47:52.338 16:24:56 -- common/autotest_common.sh@1104 -- # noatime 00:47:52.338 16:24:56 -- dd/posix.sh@53 -- # local atime_if 00:47:52.338 16:24:56 -- dd/posix.sh@54 -- # local atime_of 00:47:52.338 16:24:56 -- dd/posix.sh@58 -- # gen_bytes 512 00:47:52.338 16:24:56 -- dd/common.sh@98 -- # xtrace_disable 00:47:52.338 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:47:52.338 16:24:56 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:52.338 16:24:56 -- dd/posix.sh@60 -- # atime_if=1721665494 00:47:52.338 16:24:56 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:52.338 16:24:56 -- dd/posix.sh@61 -- # atime_of=1721665496 00:47:52.338 16:24:56 -- dd/posix.sh@66 -- # sleep 1 00:47:53.272 16:24:57 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:53.272 [2024-07-22 16:24:57.369469] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:53.272 [2024-07-22 16:24:57.369634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90844 ] 00:47:53.272 [2024-07-22 16:24:57.539571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:53.838 [2024-07-22 16:24:57.835964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:55.472  Copying: 512/512 [B] (average 500 kBps) 00:47:55.472 00:47:55.472 16:24:59 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:55.472 16:24:59 -- dd/posix.sh@69 -- # (( atime_if == 1721665494 )) 00:47:55.472 16:24:59 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:55.472 16:24:59 -- dd/posix.sh@70 -- # (( atime_of == 1721665496 )) 00:47:55.472 16:24:59 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:47:55.472 [2024-07-22 16:24:59.527868] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:55.472 [2024-07-22 16:24:59.528069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90873 ] 00:47:55.473 [2024-07-22 16:24:59.705408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:55.732 [2024-07-22 16:24:59.962069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:57.737  Copying: 512/512 [B] (average 500 kBps) 00:47:57.737 00:47:57.737 16:25:01 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:47:57.737 16:25:01 -- dd/posix.sh@73 -- # (( atime_if < 1721665500 )) 00:47:57.737 00:47:57.737 real 0m5.377s 00:47:57.737 user 0m3.494s 00:47:57.737 sys 0m0.657s 00:47:57.737 16:25:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:57.737 16:25:01 -- common/autotest_common.sh@10 -- # set +x 00:47:57.737 ************************************ 00:47:57.737 END TEST dd_flag_noatime 00:47:57.737 ************************************ 00:47:57.737 16:25:01 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:47:57.737 16:25:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:47:57.737 16:25:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:47:57.737 16:25:01 -- common/autotest_common.sh@10 -- # set +x 00:47:57.737 ************************************ 00:47:57.737 START TEST dd_flags_misc 00:47:57.737 ************************************ 00:47:57.737 16:25:01 -- common/autotest_common.sh@1104 -- # io 00:47:57.737 16:25:01 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:47:57.737 16:25:01 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:47:57.737 16:25:01 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:47:57.737 16:25:01 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:47:57.737 16:25:01 -- dd/posix.sh@86 -- # gen_bytes 512 00:47:57.737 16:25:01 -- dd/common.sh@98 -- # xtrace_disable 00:47:57.737 16:25:01 -- common/autotest_common.sh@10 -- # set +x 00:47:57.737 16:25:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:47:57.738 16:25:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:47:57.738 [2024-07-22 16:25:01.789336] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:47:57.738 [2024-07-22 16:25:01.789542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90918 ] 00:47:57.738 [2024-07-22 16:25:01.969990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:57.997 [2024-07-22 16:25:02.234797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:59.946  Copying: 512/512 [B] (average 500 kBps) 00:47:59.946 00:47:59.946 16:25:03 -- dd/posix.sh@93 -- # [[ gi8pt96vm8hgac0a6iwsegu4lkrihy4569bvgs86byee36hm2lm9xfb30njht4piwtli8aea1dwednm8avik1d8htiukr6a12tceool4739xykazguv5w1khd0yjbc0fqpnl9msnenhbusgyug8cekglsjdrvxvpu8y61aooi5gh56gugkrst9zat7voxqdxyzn0qrswj848fzxjg7p1cdkbw67uj8ro0rr59mnjxnfnkfs911mqqw3zviw4u8hv26mk2panp3d77cjo3xm43gsdx623s8u653zhngntre8kxk9zchz0akirmiougxfwecaoy1t6jaru99wdh56w8jtkfpkt1zdxrobbjn9rgx4a8pr00p0xjx2ddtpayibsj0rjfttpswaj2yoabg7ptrkjkzkc7ffns32ibh9218qq17v6r0dp1matc1ziqpzdoza5wwz90x2x25pq08wwkutc7lnmoive0gdi5ztf6q1yeo7037rm6olueu0s9vog == \g\i\8\p\t\9\6\v\m\8\h\g\a\c\0\a\6\i\w\s\e\g\u\4\l\k\r\i\h\y\4\5\6\9\b\v\g\s\8\6\b\y\e\e\3\6\h\m\2\l\m\9\x\f\b\3\0\n\j\h\t\4\p\i\w\t\l\i\8\a\e\a\1\d\w\e\d\n\m\8\a\v\i\k\1\d\8\h\t\i\u\k\r\6\a\1\2\t\c\e\o\o\l\4\7\3\9\x\y\k\a\z\g\u\v\5\w\1\k\h\d\0\y\j\b\c\0\f\q\p\n\l\9\m\s\n\e\n\h\b\u\s\g\y\u\g\8\c\e\k\g\l\s\j\d\r\v\x\v\p\u\8\y\6\1\a\o\o\i\5\g\h\5\6\g\u\g\k\r\s\t\9\z\a\t\7\v\o\x\q\d\x\y\z\n\0\q\r\s\w\j\8\4\8\f\z\x\j\g\7\p\1\c\d\k\b\w\6\7\u\j\8\r\o\0\r\r\5\9\m\n\j\x\n\f\n\k\f\s\9\1\1\m\q\q\w\3\z\v\i\w\4\u\8\h\v\2\6\m\k\2\p\a\n\p\3\d\7\7\c\j\o\3\x\m\4\3\g\s\d\x\6\2\3\s\8\u\6\5\3\z\h\n\g\n\t\r\e\8\k\x\k\9\z\c\h\z\0\a\k\i\r\m\i\o\u\g\x\f\w\e\c\a\o\y\1\t\6\j\a\r\u\9\9\w\d\h\5\6\w\8\j\t\k\f\p\k\t\1\z\d\x\r\o\b\b\j\n\9\r\g\x\4\a\8\p\r\0\0\p\0\x\j\x\2\d\d\t\p\a\y\i\b\s\j\0\r\j\f\t\t\p\s\w\a\j\2\y\o\a\b\g\7\p\t\r\k\j\k\z\k\c\7\f\f\n\s\3\2\i\b\h\9\2\1\8\q\q\1\7\v\6\r\0\d\p\1\m\a\t\c\1\z\i\q\p\z\d\o\z\a\5\w\w\z\9\0\x\2\x\2\5\p\q\0\8\w\w\k\u\t\c\7\l\n\m\o\i\v\e\0\g\d\i\5\z\t\f\6\q\1\y\e\o\7\0\3\7\r\m\6\o\l\u\e\u\0\s\9\v\o\g ]] 00:47:59.946 16:25:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:47:59.946 16:25:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:47:59.946 [2024-07-22 16:25:03.997546] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:47:59.946 [2024-07-22 16:25:03.997701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90939 ] 00:47:59.946 [2024-07-22 16:25:04.176704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:00.204 [2024-07-22 16:25:04.438705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:02.145  Copying: 512/512 [B] (average 500 kBps) 00:48:02.145 00:48:02.145 16:25:06 -- dd/posix.sh@93 -- # [[ gi8pt96vm8hgac0a6iwsegu4lkrihy4569bvgs86byee36hm2lm9xfb30njht4piwtli8aea1dwednm8avik1d8htiukr6a12tceool4739xykazguv5w1khd0yjbc0fqpnl9msnenhbusgyug8cekglsjdrvxvpu8y61aooi5gh56gugkrst9zat7voxqdxyzn0qrswj848fzxjg7p1cdkbw67uj8ro0rr59mnjxnfnkfs911mqqw3zviw4u8hv26mk2panp3d77cjo3xm43gsdx623s8u653zhngntre8kxk9zchz0akirmiougxfwecaoy1t6jaru99wdh56w8jtkfpkt1zdxrobbjn9rgx4a8pr00p0xjx2ddtpayibsj0rjfttpswaj2yoabg7ptrkjkzkc7ffns32ibh9218qq17v6r0dp1matc1ziqpzdoza5wwz90x2x25pq08wwkutc7lnmoive0gdi5ztf6q1yeo7037rm6olueu0s9vog == \g\i\8\p\t\9\6\v\m\8\h\g\a\c\0\a\6\i\w\s\e\g\u\4\l\k\r\i\h\y\4\5\6\9\b\v\g\s\8\6\b\y\e\e\3\6\h\m\2\l\m\9\x\f\b\3\0\n\j\h\t\4\p\i\w\t\l\i\8\a\e\a\1\d\w\e\d\n\m\8\a\v\i\k\1\d\8\h\t\i\u\k\r\6\a\1\2\t\c\e\o\o\l\4\7\3\9\x\y\k\a\z\g\u\v\5\w\1\k\h\d\0\y\j\b\c\0\f\q\p\n\l\9\m\s\n\e\n\h\b\u\s\g\y\u\g\8\c\e\k\g\l\s\j\d\r\v\x\v\p\u\8\y\6\1\a\o\o\i\5\g\h\5\6\g\u\g\k\r\s\t\9\z\a\t\7\v\o\x\q\d\x\y\z\n\0\q\r\s\w\j\8\4\8\f\z\x\j\g\7\p\1\c\d\k\b\w\6\7\u\j\8\r\o\0\r\r\5\9\m\n\j\x\n\f\n\k\f\s\9\1\1\m\q\q\w\3\z\v\i\w\4\u\8\h\v\2\6\m\k\2\p\a\n\p\3\d\7\7\c\j\o\3\x\m\4\3\g\s\d\x\6\2\3\s\8\u\6\5\3\z\h\n\g\n\t\r\e\8\k\x\k\9\z\c\h\z\0\a\k\i\r\m\i\o\u\g\x\f\w\e\c\a\o\y\1\t\6\j\a\r\u\9\9\w\d\h\5\6\w\8\j\t\k\f\p\k\t\1\z\d\x\r\o\b\b\j\n\9\r\g\x\4\a\8\p\r\0\0\p\0\x\j\x\2\d\d\t\p\a\y\i\b\s\j\0\r\j\f\t\t\p\s\w\a\j\2\y\o\a\b\g\7\p\t\r\k\j\k\z\k\c\7\f\f\n\s\3\2\i\b\h\9\2\1\8\q\q\1\7\v\6\r\0\d\p\1\m\a\t\c\1\z\i\q\p\z\d\o\z\a\5\w\w\z\9\0\x\2\x\2\5\p\q\0\8\w\w\k\u\t\c\7\l\n\m\o\i\v\e\0\g\d\i\5\z\t\f\6\q\1\y\e\o\7\0\3\7\r\m\6\o\l\u\e\u\0\s\9\v\o\g ]] 00:48:02.145 16:25:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:02.145 16:25:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:48:02.145 [2024-07-22 16:25:06.113944] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:02.145 [2024-07-22 16:25:06.114142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90964 ] 00:48:02.145 [2024-07-22 16:25:06.281277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:02.404 [2024-07-22 16:25:06.549019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:04.039  Copying: 512/512 [B] (average 125 kBps) 00:48:04.039 00:48:04.039 16:25:08 -- dd/posix.sh@93 -- # [[ gi8pt96vm8hgac0a6iwsegu4lkrihy4569bvgs86byee36hm2lm9xfb30njht4piwtli8aea1dwednm8avik1d8htiukr6a12tceool4739xykazguv5w1khd0yjbc0fqpnl9msnenhbusgyug8cekglsjdrvxvpu8y61aooi5gh56gugkrst9zat7voxqdxyzn0qrswj848fzxjg7p1cdkbw67uj8ro0rr59mnjxnfnkfs911mqqw3zviw4u8hv26mk2panp3d77cjo3xm43gsdx623s8u653zhngntre8kxk9zchz0akirmiougxfwecaoy1t6jaru99wdh56w8jtkfpkt1zdxrobbjn9rgx4a8pr00p0xjx2ddtpayibsj0rjfttpswaj2yoabg7ptrkjkzkc7ffns32ibh9218qq17v6r0dp1matc1ziqpzdoza5wwz90x2x25pq08wwkutc7lnmoive0gdi5ztf6q1yeo7037rm6olueu0s9vog == \g\i\8\p\t\9\6\v\m\8\h\g\a\c\0\a\6\i\w\s\e\g\u\4\l\k\r\i\h\y\4\5\6\9\b\v\g\s\8\6\b\y\e\e\3\6\h\m\2\l\m\9\x\f\b\3\0\n\j\h\t\4\p\i\w\t\l\i\8\a\e\a\1\d\w\e\d\n\m\8\a\v\i\k\1\d\8\h\t\i\u\k\r\6\a\1\2\t\c\e\o\o\l\4\7\3\9\x\y\k\a\z\g\u\v\5\w\1\k\h\d\0\y\j\b\c\0\f\q\p\n\l\9\m\s\n\e\n\h\b\u\s\g\y\u\g\8\c\e\k\g\l\s\j\d\r\v\x\v\p\u\8\y\6\1\a\o\o\i\5\g\h\5\6\g\u\g\k\r\s\t\9\z\a\t\7\v\o\x\q\d\x\y\z\n\0\q\r\s\w\j\8\4\8\f\z\x\j\g\7\p\1\c\d\k\b\w\6\7\u\j\8\r\o\0\r\r\5\9\m\n\j\x\n\f\n\k\f\s\9\1\1\m\q\q\w\3\z\v\i\w\4\u\8\h\v\2\6\m\k\2\p\a\n\p\3\d\7\7\c\j\o\3\x\m\4\3\g\s\d\x\6\2\3\s\8\u\6\5\3\z\h\n\g\n\t\r\e\8\k\x\k\9\z\c\h\z\0\a\k\i\r\m\i\o\u\g\x\f\w\e\c\a\o\y\1\t\6\j\a\r\u\9\9\w\d\h\5\6\w\8\j\t\k\f\p\k\t\1\z\d\x\r\o\b\b\j\n\9\r\g\x\4\a\8\p\r\0\0\p\0\x\j\x\2\d\d\t\p\a\y\i\b\s\j\0\r\j\f\t\t\p\s\w\a\j\2\y\o\a\b\g\7\p\t\r\k\j\k\z\k\c\7\f\f\n\s\3\2\i\b\h\9\2\1\8\q\q\1\7\v\6\r\0\d\p\1\m\a\t\c\1\z\i\q\p\z\d\o\z\a\5\w\w\z\9\0\x\2\x\2\5\p\q\0\8\w\w\k\u\t\c\7\l\n\m\o\i\v\e\0\g\d\i\5\z\t\f\6\q\1\y\e\o\7\0\3\7\r\m\6\o\l\u\e\u\0\s\9\v\o\g ]] 00:48:04.039 16:25:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:04.039 16:25:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:48:04.039 [2024-07-22 16:25:08.243496] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:04.039 [2024-07-22 16:25:08.243652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90984 ] 00:48:04.298 [2024-07-22 16:25:08.410048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:04.556 [2024-07-22 16:25:08.667779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:06.190  Copying: 512/512 [B] (average 125 kBps) 00:48:06.190 00:48:06.190 16:25:10 -- dd/posix.sh@93 -- # [[ gi8pt96vm8hgac0a6iwsegu4lkrihy4569bvgs86byee36hm2lm9xfb30njht4piwtli8aea1dwednm8avik1d8htiukr6a12tceool4739xykazguv5w1khd0yjbc0fqpnl9msnenhbusgyug8cekglsjdrvxvpu8y61aooi5gh56gugkrst9zat7voxqdxyzn0qrswj848fzxjg7p1cdkbw67uj8ro0rr59mnjxnfnkfs911mqqw3zviw4u8hv26mk2panp3d77cjo3xm43gsdx623s8u653zhngntre8kxk9zchz0akirmiougxfwecaoy1t6jaru99wdh56w8jtkfpkt1zdxrobbjn9rgx4a8pr00p0xjx2ddtpayibsj0rjfttpswaj2yoabg7ptrkjkzkc7ffns32ibh9218qq17v6r0dp1matc1ziqpzdoza5wwz90x2x25pq08wwkutc7lnmoive0gdi5ztf6q1yeo7037rm6olueu0s9vog == \g\i\8\p\t\9\6\v\m\8\h\g\a\c\0\a\6\i\w\s\e\g\u\4\l\k\r\i\h\y\4\5\6\9\b\v\g\s\8\6\b\y\e\e\3\6\h\m\2\l\m\9\x\f\b\3\0\n\j\h\t\4\p\i\w\t\l\i\8\a\e\a\1\d\w\e\d\n\m\8\a\v\i\k\1\d\8\h\t\i\u\k\r\6\a\1\2\t\c\e\o\o\l\4\7\3\9\x\y\k\a\z\g\u\v\5\w\1\k\h\d\0\y\j\b\c\0\f\q\p\n\l\9\m\s\n\e\n\h\b\u\s\g\y\u\g\8\c\e\k\g\l\s\j\d\r\v\x\v\p\u\8\y\6\1\a\o\o\i\5\g\h\5\6\g\u\g\k\r\s\t\9\z\a\t\7\v\o\x\q\d\x\y\z\n\0\q\r\s\w\j\8\4\8\f\z\x\j\g\7\p\1\c\d\k\b\w\6\7\u\j\8\r\o\0\r\r\5\9\m\n\j\x\n\f\n\k\f\s\9\1\1\m\q\q\w\3\z\v\i\w\4\u\8\h\v\2\6\m\k\2\p\a\n\p\3\d\7\7\c\j\o\3\x\m\4\3\g\s\d\x\6\2\3\s\8\u\6\5\3\z\h\n\g\n\t\r\e\8\k\x\k\9\z\c\h\z\0\a\k\i\r\m\i\o\u\g\x\f\w\e\c\a\o\y\1\t\6\j\a\r\u\9\9\w\d\h\5\6\w\8\j\t\k\f\p\k\t\1\z\d\x\r\o\b\b\j\n\9\r\g\x\4\a\8\p\r\0\0\p\0\x\j\x\2\d\d\t\p\a\y\i\b\s\j\0\r\j\f\t\t\p\s\w\a\j\2\y\o\a\b\g\7\p\t\r\k\j\k\z\k\c\7\f\f\n\s\3\2\i\b\h\9\2\1\8\q\q\1\7\v\6\r\0\d\p\1\m\a\t\c\1\z\i\q\p\z\d\o\z\a\5\w\w\z\9\0\x\2\x\2\5\p\q\0\8\w\w\k\u\t\c\7\l\n\m\o\i\v\e\0\g\d\i\5\z\t\f\6\q\1\y\e\o\7\0\3\7\r\m\6\o\l\u\e\u\0\s\9\v\o\g ]] 00:48:06.190 16:25:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:48:06.190 16:25:10 -- dd/posix.sh@86 -- # gen_bytes 512 00:48:06.190 16:25:10 -- dd/common.sh@98 -- # xtrace_disable 00:48:06.190 16:25:10 -- common/autotest_common.sh@10 -- # set +x 00:48:06.190 16:25:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:06.190 16:25:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:48:06.190 [2024-07-22 16:25:10.383020] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:06.190 [2024-07-22 16:25:10.383212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91009 ] 00:48:06.474 [2024-07-22 16:25:10.566373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:06.732 [2024-07-22 16:25:10.833825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:08.367  Copying: 512/512 [B] (average 500 kBps) 00:48:08.367 00:48:08.367 16:25:12 -- dd/posix.sh@93 -- # [[ 6o45dan60mzqpepzu787fjqt0uoutpjhli7hpgv2b8miw75ykl13up93sl8wxt4yjgko07ho562xdmjzrmotyakk00qs5yu6l2p1v2ypj01g5yr2el9lqa7e8ogvphwgvt5ca0camel22bhoqewqwe13p60somipa04nao1n5g5nh7nvk9yqer0t7i7f349s7gknl7p27brjjukjmi2j7njqw610egn2h92ygaab8kqjzgulzvo75frtj5tz1lkb03vnt7exqvwfw2le61uky8mgzzeyfqe9frgqk30noazqa4ucbi2rpv3nuhlw7gcw7ayk19tiph54t9v65cw0zodjbuzw6aw2s32c58c08yyk3stj6x3wqmfnwvzdaz6kbtpjgs52rcudubhfsg3ols6axekhow9ue7adorwtb5bii64x0ie4vglmezmfn34lwhl4nsr9t5dt8sksx8h4fg5o4i9mdr3klh2fubw4lhdb1rpdwgl28h3gva0yzjzt == \6\o\4\5\d\a\n\6\0\m\z\q\p\e\p\z\u\7\8\7\f\j\q\t\0\u\o\u\t\p\j\h\l\i\7\h\p\g\v\2\b\8\m\i\w\7\5\y\k\l\1\3\u\p\9\3\s\l\8\w\x\t\4\y\j\g\k\o\0\7\h\o\5\6\2\x\d\m\j\z\r\m\o\t\y\a\k\k\0\0\q\s\5\y\u\6\l\2\p\1\v\2\y\p\j\0\1\g\5\y\r\2\e\l\9\l\q\a\7\e\8\o\g\v\p\h\w\g\v\t\5\c\a\0\c\a\m\e\l\2\2\b\h\o\q\e\w\q\w\e\1\3\p\6\0\s\o\m\i\p\a\0\4\n\a\o\1\n\5\g\5\n\h\7\n\v\k\9\y\q\e\r\0\t\7\i\7\f\3\4\9\s\7\g\k\n\l\7\p\2\7\b\r\j\j\u\k\j\m\i\2\j\7\n\j\q\w\6\1\0\e\g\n\2\h\9\2\y\g\a\a\b\8\k\q\j\z\g\u\l\z\v\o\7\5\f\r\t\j\5\t\z\1\l\k\b\0\3\v\n\t\7\e\x\q\v\w\f\w\2\l\e\6\1\u\k\y\8\m\g\z\z\e\y\f\q\e\9\f\r\g\q\k\3\0\n\o\a\z\q\a\4\u\c\b\i\2\r\p\v\3\n\u\h\l\w\7\g\c\w\7\a\y\k\1\9\t\i\p\h\5\4\t\9\v\6\5\c\w\0\z\o\d\j\b\u\z\w\6\a\w\2\s\3\2\c\5\8\c\0\8\y\y\k\3\s\t\j\6\x\3\w\q\m\f\n\w\v\z\d\a\z\6\k\b\t\p\j\g\s\5\2\r\c\u\d\u\b\h\f\s\g\3\o\l\s\6\a\x\e\k\h\o\w\9\u\e\7\a\d\o\r\w\t\b\5\b\i\i\6\4\x\0\i\e\4\v\g\l\m\e\z\m\f\n\3\4\l\w\h\l\4\n\s\r\9\t\5\d\t\8\s\k\s\x\8\h\4\f\g\5\o\4\i\9\m\d\r\3\k\l\h\2\f\u\b\w\4\l\h\d\b\1\r\p\d\w\g\l\2\8\h\3\g\v\a\0\y\z\j\z\t ]] 00:48:08.367 16:25:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:08.367 16:25:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:48:08.367 [2024-07-22 16:25:12.532328] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:08.367 [2024-07-22 16:25:12.532539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91039 ] 00:48:08.625 [2024-07-22 16:25:12.708551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:08.892 [2024-07-22 16:25:12.975075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:10.537  Copying: 512/512 [B] (average 500 kBps) 00:48:10.537 00:48:10.537 16:25:14 -- dd/posix.sh@93 -- # [[ 6o45dan60mzqpepzu787fjqt0uoutpjhli7hpgv2b8miw75ykl13up93sl8wxt4yjgko07ho562xdmjzrmotyakk00qs5yu6l2p1v2ypj01g5yr2el9lqa7e8ogvphwgvt5ca0camel22bhoqewqwe13p60somipa04nao1n5g5nh7nvk9yqer0t7i7f349s7gknl7p27brjjukjmi2j7njqw610egn2h92ygaab8kqjzgulzvo75frtj5tz1lkb03vnt7exqvwfw2le61uky8mgzzeyfqe9frgqk30noazqa4ucbi2rpv3nuhlw7gcw7ayk19tiph54t9v65cw0zodjbuzw6aw2s32c58c08yyk3stj6x3wqmfnwvzdaz6kbtpjgs52rcudubhfsg3ols6axekhow9ue7adorwtb5bii64x0ie4vglmezmfn34lwhl4nsr9t5dt8sksx8h4fg5o4i9mdr3klh2fubw4lhdb1rpdwgl28h3gva0yzjzt == \6\o\4\5\d\a\n\6\0\m\z\q\p\e\p\z\u\7\8\7\f\j\q\t\0\u\o\u\t\p\j\h\l\i\7\h\p\g\v\2\b\8\m\i\w\7\5\y\k\l\1\3\u\p\9\3\s\l\8\w\x\t\4\y\j\g\k\o\0\7\h\o\5\6\2\x\d\m\j\z\r\m\o\t\y\a\k\k\0\0\q\s\5\y\u\6\l\2\p\1\v\2\y\p\j\0\1\g\5\y\r\2\e\l\9\l\q\a\7\e\8\o\g\v\p\h\w\g\v\t\5\c\a\0\c\a\m\e\l\2\2\b\h\o\q\e\w\q\w\e\1\3\p\6\0\s\o\m\i\p\a\0\4\n\a\o\1\n\5\g\5\n\h\7\n\v\k\9\y\q\e\r\0\t\7\i\7\f\3\4\9\s\7\g\k\n\l\7\p\2\7\b\r\j\j\u\k\j\m\i\2\j\7\n\j\q\w\6\1\0\e\g\n\2\h\9\2\y\g\a\a\b\8\k\q\j\z\g\u\l\z\v\o\7\5\f\r\t\j\5\t\z\1\l\k\b\0\3\v\n\t\7\e\x\q\v\w\f\w\2\l\e\6\1\u\k\y\8\m\g\z\z\e\y\f\q\e\9\f\r\g\q\k\3\0\n\o\a\z\q\a\4\u\c\b\i\2\r\p\v\3\n\u\h\l\w\7\g\c\w\7\a\y\k\1\9\t\i\p\h\5\4\t\9\v\6\5\c\w\0\z\o\d\j\b\u\z\w\6\a\w\2\s\3\2\c\5\8\c\0\8\y\y\k\3\s\t\j\6\x\3\w\q\m\f\n\w\v\z\d\a\z\6\k\b\t\p\j\g\s\5\2\r\c\u\d\u\b\h\f\s\g\3\o\l\s\6\a\x\e\k\h\o\w\9\u\e\7\a\d\o\r\w\t\b\5\b\i\i\6\4\x\0\i\e\4\v\g\l\m\e\z\m\f\n\3\4\l\w\h\l\4\n\s\r\9\t\5\d\t\8\s\k\s\x\8\h\4\f\g\5\o\4\i\9\m\d\r\3\k\l\h\2\f\u\b\w\4\l\h\d\b\1\r\p\d\w\g\l\2\8\h\3\g\v\a\0\y\z\j\z\t ]] 00:48:10.537 16:25:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:10.537 16:25:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:48:10.537 [2024-07-22 16:25:14.675735] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:10.537 [2024-07-22 16:25:14.675910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91060 ] 00:48:10.795 [2024-07-22 16:25:14.850710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:11.054 [2024-07-22 16:25:15.114822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:12.687  Copying: 512/512 [B] (average 100 kBps) 00:48:12.687 00:48:12.687 16:25:16 -- dd/posix.sh@93 -- # [[ 6o45dan60mzqpepzu787fjqt0uoutpjhli7hpgv2b8miw75ykl13up93sl8wxt4yjgko07ho562xdmjzrmotyakk00qs5yu6l2p1v2ypj01g5yr2el9lqa7e8ogvphwgvt5ca0camel22bhoqewqwe13p60somipa04nao1n5g5nh7nvk9yqer0t7i7f349s7gknl7p27brjjukjmi2j7njqw610egn2h92ygaab8kqjzgulzvo75frtj5tz1lkb03vnt7exqvwfw2le61uky8mgzzeyfqe9frgqk30noazqa4ucbi2rpv3nuhlw7gcw7ayk19tiph54t9v65cw0zodjbuzw6aw2s32c58c08yyk3stj6x3wqmfnwvzdaz6kbtpjgs52rcudubhfsg3ols6axekhow9ue7adorwtb5bii64x0ie4vglmezmfn34lwhl4nsr9t5dt8sksx8h4fg5o4i9mdr3klh2fubw4lhdb1rpdwgl28h3gva0yzjzt == \6\o\4\5\d\a\n\6\0\m\z\q\p\e\p\z\u\7\8\7\f\j\q\t\0\u\o\u\t\p\j\h\l\i\7\h\p\g\v\2\b\8\m\i\w\7\5\y\k\l\1\3\u\p\9\3\s\l\8\w\x\t\4\y\j\g\k\o\0\7\h\o\5\6\2\x\d\m\j\z\r\m\o\t\y\a\k\k\0\0\q\s\5\y\u\6\l\2\p\1\v\2\y\p\j\0\1\g\5\y\r\2\e\l\9\l\q\a\7\e\8\o\g\v\p\h\w\g\v\t\5\c\a\0\c\a\m\e\l\2\2\b\h\o\q\e\w\q\w\e\1\3\p\6\0\s\o\m\i\p\a\0\4\n\a\o\1\n\5\g\5\n\h\7\n\v\k\9\y\q\e\r\0\t\7\i\7\f\3\4\9\s\7\g\k\n\l\7\p\2\7\b\r\j\j\u\k\j\m\i\2\j\7\n\j\q\w\6\1\0\e\g\n\2\h\9\2\y\g\a\a\b\8\k\q\j\z\g\u\l\z\v\o\7\5\f\r\t\j\5\t\z\1\l\k\b\0\3\v\n\t\7\e\x\q\v\w\f\w\2\l\e\6\1\u\k\y\8\m\g\z\z\e\y\f\q\e\9\f\r\g\q\k\3\0\n\o\a\z\q\a\4\u\c\b\i\2\r\p\v\3\n\u\h\l\w\7\g\c\w\7\a\y\k\1\9\t\i\p\h\5\4\t\9\v\6\5\c\w\0\z\o\d\j\b\u\z\w\6\a\w\2\s\3\2\c\5\8\c\0\8\y\y\k\3\s\t\j\6\x\3\w\q\m\f\n\w\v\z\d\a\z\6\k\b\t\p\j\g\s\5\2\r\c\u\d\u\b\h\f\s\g\3\o\l\s\6\a\x\e\k\h\o\w\9\u\e\7\a\d\o\r\w\t\b\5\b\i\i\6\4\x\0\i\e\4\v\g\l\m\e\z\m\f\n\3\4\l\w\h\l\4\n\s\r\9\t\5\d\t\8\s\k\s\x\8\h\4\f\g\5\o\4\i\9\m\d\r\3\k\l\h\2\f\u\b\w\4\l\h\d\b\1\r\p\d\w\g\l\2\8\h\3\g\v\a\0\y\z\j\z\t ]] 00:48:12.687 16:25:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:12.687 16:25:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:48:12.688 [2024-07-22 16:25:16.764665] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:12.688 [2024-07-22 16:25:16.764848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91085 ] 00:48:12.688 [2024-07-22 16:25:16.930982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:12.946 [2024-07-22 16:25:17.190688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:14.618  Copying: 512/512 [B] (average 125 kBps) 00:48:14.618 00:48:14.618 16:25:18 -- dd/posix.sh@93 -- # [[ 6o45dan60mzqpepzu787fjqt0uoutpjhli7hpgv2b8miw75ykl13up93sl8wxt4yjgko07ho562xdmjzrmotyakk00qs5yu6l2p1v2ypj01g5yr2el9lqa7e8ogvphwgvt5ca0camel22bhoqewqwe13p60somipa04nao1n5g5nh7nvk9yqer0t7i7f349s7gknl7p27brjjukjmi2j7njqw610egn2h92ygaab8kqjzgulzvo75frtj5tz1lkb03vnt7exqvwfw2le61uky8mgzzeyfqe9frgqk30noazqa4ucbi2rpv3nuhlw7gcw7ayk19tiph54t9v65cw0zodjbuzw6aw2s32c58c08yyk3stj6x3wqmfnwvzdaz6kbtpjgs52rcudubhfsg3ols6axekhow9ue7adorwtb5bii64x0ie4vglmezmfn34lwhl4nsr9t5dt8sksx8h4fg5o4i9mdr3klh2fubw4lhdb1rpdwgl28h3gva0yzjzt == \6\o\4\5\d\a\n\6\0\m\z\q\p\e\p\z\u\7\8\7\f\j\q\t\0\u\o\u\t\p\j\h\l\i\7\h\p\g\v\2\b\8\m\i\w\7\5\y\k\l\1\3\u\p\9\3\s\l\8\w\x\t\4\y\j\g\k\o\0\7\h\o\5\6\2\x\d\m\j\z\r\m\o\t\y\a\k\k\0\0\q\s\5\y\u\6\l\2\p\1\v\2\y\p\j\0\1\g\5\y\r\2\e\l\9\l\q\a\7\e\8\o\g\v\p\h\w\g\v\t\5\c\a\0\c\a\m\e\l\2\2\b\h\o\q\e\w\q\w\e\1\3\p\6\0\s\o\m\i\p\a\0\4\n\a\o\1\n\5\g\5\n\h\7\n\v\k\9\y\q\e\r\0\t\7\i\7\f\3\4\9\s\7\g\k\n\l\7\p\2\7\b\r\j\j\u\k\j\m\i\2\j\7\n\j\q\w\6\1\0\e\g\n\2\h\9\2\y\g\a\a\b\8\k\q\j\z\g\u\l\z\v\o\7\5\f\r\t\j\5\t\z\1\l\k\b\0\3\v\n\t\7\e\x\q\v\w\f\w\2\l\e\6\1\u\k\y\8\m\g\z\z\e\y\f\q\e\9\f\r\g\q\k\3\0\n\o\a\z\q\a\4\u\c\b\i\2\r\p\v\3\n\u\h\l\w\7\g\c\w\7\a\y\k\1\9\t\i\p\h\5\4\t\9\v\6\5\c\w\0\z\o\d\j\b\u\z\w\6\a\w\2\s\3\2\c\5\8\c\0\8\y\y\k\3\s\t\j\6\x\3\w\q\m\f\n\w\v\z\d\a\z\6\k\b\t\p\j\g\s\5\2\r\c\u\d\u\b\h\f\s\g\3\o\l\s\6\a\x\e\k\h\o\w\9\u\e\7\a\d\o\r\w\t\b\5\b\i\i\6\4\x\0\i\e\4\v\g\l\m\e\z\m\f\n\3\4\l\w\h\l\4\n\s\r\9\t\5\d\t\8\s\k\s\x\8\h\4\f\g\5\o\4\i\9\m\d\r\3\k\l\h\2\f\u\b\w\4\l\h\d\b\1\r\p\d\w\g\l\2\8\h\3\g\v\a\0\y\z\j\z\t ]] 00:48:14.618 00:48:14.618 real 0m17.073s 00:48:14.618 user 0m13.634s 00:48:14.618 sys 0m2.509s 00:48:14.618 16:25:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:14.618 ************************************ 00:48:14.618 END TEST dd_flags_misc 00:48:14.618 ************************************ 00:48:14.618 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:48:14.618 16:25:18 -- dd/posix.sh@131 -- # tests_forced_aio 00:48:14.618 16:25:18 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:48:14.618 * Second test run, disabling liburing, forcing AIO 00:48:14.618 16:25:18 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:48:14.618 16:25:18 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:48:14.618 16:25:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:48:14.618 16:25:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:14.618 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:48:14.618 ************************************ 00:48:14.618 START TEST dd_flag_append_forced_aio 00:48:14.618 ************************************ 00:48:14.618 16:25:18 -- common/autotest_common.sh@1104 -- # append 00:48:14.618 16:25:18 -- dd/posix.sh@16 -- # local dump0 00:48:14.618 16:25:18 -- dd/posix.sh@17 -- # local dump1 00:48:14.618 16:25:18 -- dd/posix.sh@19 -- # gen_bytes 32 00:48:14.618 16:25:18 -- 
dd/common.sh@98 -- # xtrace_disable 00:48:14.618 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:48:14.618 16:25:18 -- dd/posix.sh@19 -- # dump0=z1moexsncb25xbq74qorw56cr1c4oxan 00:48:14.618 16:25:18 -- dd/posix.sh@20 -- # gen_bytes 32 00:48:14.618 16:25:18 -- dd/common.sh@98 -- # xtrace_disable 00:48:14.618 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:48:14.618 16:25:18 -- dd/posix.sh@20 -- # dump1=kgctrr7cfesoa6gso6u3fyctsrprdpmu 00:48:14.618 16:25:18 -- dd/posix.sh@22 -- # printf %s z1moexsncb25xbq74qorw56cr1c4oxan 00:48:14.618 16:25:18 -- dd/posix.sh@23 -- # printf %s kgctrr7cfesoa6gso6u3fyctsrprdpmu 00:48:14.618 16:25:18 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:48:14.876 [2024-07-22 16:25:18.905724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:48:14.876 [2024-07-22 16:25:18.905888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91124 ] 00:48:14.876 [2024-07-22 16:25:19.078805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:15.134 [2024-07-22 16:25:19.359638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:17.092  Copying: 32/32 [B] (average 31 kBps) 00:48:17.092 00:48:17.092 16:25:20 -- dd/posix.sh@27 -- # [[ kgctrr7cfesoa6gso6u3fyctsrprdpmuz1moexsncb25xbq74qorw56cr1c4oxan == \k\g\c\t\r\r\7\c\f\e\s\o\a\6\g\s\o\6\u\3\f\y\c\t\s\r\p\r\d\p\m\u\z\1\m\o\e\x\s\n\c\b\2\5\x\b\q\7\4\q\o\r\w\5\6\c\r\1\c\4\o\x\a\n ]] 00:48:17.092 00:48:17.092 real 0m2.143s 00:48:17.092 user 0m1.722s 00:48:17.092 sys 0m0.309s 00:48:17.092 ************************************ 00:48:17.092 END TEST dd_flag_append_forced_aio 00:48:17.092 ************************************ 00:48:17.093 16:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:17.093 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:48:17.093 16:25:21 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:48:17.093 16:25:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:48:17.093 16:25:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:17.093 16:25:21 -- common/autotest_common.sh@10 -- # set +x 00:48:17.093 ************************************ 00:48:17.093 START TEST dd_flag_directory_forced_aio 00:48:17.093 ************************************ 00:48:17.093 16:25:21 -- common/autotest_common.sh@1104 -- # directory 00:48:17.093 16:25:21 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:48:17.093 16:25:21 -- common/autotest_common.sh@640 -- # local es=0 00:48:17.093 16:25:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:48:17.093 16:25:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:17.093 16:25:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:17.093 16:25:21 -- common/autotest_common.sh@632 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:17.093 16:25:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:17.093 16:25:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:17.093 16:25:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:17.093 16:25:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:17.093 16:25:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:48:17.093 16:25:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:48:17.093 [2024-07-22 16:25:21.100464] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:48:17.093 [2024-07-22 16:25:21.100634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91168 ] 00:48:17.093 [2024-07-22 16:25:21.274411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:17.352 [2024-07-22 16:25:21.600056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:17.919 [2024-07-22 16:25:21.943569] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:48:17.919 [2024-07-22 16:25:21.943644] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:48:17.919 [2024-07-22 16:25:21.943668] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:18.485 [2024-07-22 16:25:22.756025] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:48:19.052 16:25:23 -- common/autotest_common.sh@643 -- # es=236 00:48:19.052 16:25:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:48:19.052 16:25:23 -- common/autotest_common.sh@652 -- # es=108 00:48:19.052 16:25:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:48:19.052 16:25:23 -- common/autotest_common.sh@660 -- # es=1 00:48:19.052 16:25:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:48:19.052 16:25:23 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:48:19.052 16:25:23 -- common/autotest_common.sh@640 -- # local es=0 00:48:19.053 16:25:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:48:19.053 16:25:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:19.053 16:25:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:19.053 16:25:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:19.053 16:25:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:19.053 16:25:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:19.053 16:25:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:19.053 16:25:23 -- 
common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:19.053 16:25:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:48:19.053 16:25:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:48:19.053 [2024-07-22 16:25:23.310502] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:48:19.053 [2024-07-22 16:25:23.310863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91194 ] 00:48:19.311 [2024-07-22 16:25:23.490968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:19.569 [2024-07-22 16:25:23.756543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:19.827 [2024-07-22 16:25:24.087181] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:48:19.827 [2024-07-22 16:25:24.087268] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:48:19.827 [2024-07-22 16:25:24.087295] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:20.762 [2024-07-22 16:25:24.902648] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:48:21.328 16:25:25 -- common/autotest_common.sh@643 -- # es=236 00:48:21.328 16:25:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:48:21.328 ************************************ 00:48:21.328 END TEST dd_flag_directory_forced_aio 00:48:21.328 ************************************ 00:48:21.328 16:25:25 -- common/autotest_common.sh@652 -- # es=108 00:48:21.328 16:25:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:48:21.328 16:25:25 -- common/autotest_common.sh@660 -- # es=1 00:48:21.328 16:25:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:48:21.328 00:48:21.328 real 0m4.359s 00:48:21.328 user 0m3.512s 00:48:21.328 sys 0m0.644s 00:48:21.328 16:25:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:21.328 16:25:25 -- common/autotest_common.sh@10 -- # set +x 00:48:21.328 16:25:25 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:48:21.328 16:25:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:48:21.328 16:25:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:21.328 16:25:25 -- common/autotest_common.sh@10 -- # set +x 00:48:21.328 ************************************ 00:48:21.328 START TEST dd_flag_nofollow_forced_aio 00:48:21.328 ************************************ 00:48:21.328 16:25:25 -- common/autotest_common.sh@1104 -- # nofollow 00:48:21.328 16:25:25 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:48:21.328 16:25:25 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:48:21.328 16:25:25 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:48:21.328 16:25:25 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:48:21.328 16:25:25 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:21.328 16:25:25 -- common/autotest_common.sh@640 -- # local es=0 00:48:21.328 16:25:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:21.328 16:25:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:21.328 16:25:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:21.328 16:25:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:21.328 16:25:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:21.328 16:25:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:21.328 16:25:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:21.328 16:25:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:21.328 16:25:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:48:21.328 16:25:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:21.328 [2024-07-22 16:25:25.531373] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:48:21.328 [2024-07-22 16:25:25.531546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91236 ] 00:48:21.586 [2024-07-22 16:25:25.712123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:21.843 [2024-07-22 16:25:25.977264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:22.100 [2024-07-22 16:25:26.310436] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:48:22.100 [2024-07-22 16:25:26.310531] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:48:22.100 [2024-07-22 16:25:26.310558] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:23.077 [2024-07-22 16:25:27.125439] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:48:23.642 16:25:27 -- common/autotest_common.sh@643 -- # es=216 00:48:23.642 16:25:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:48:23.642 16:25:27 -- common/autotest_common.sh@652 -- # es=88 00:48:23.642 16:25:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:48:23.642 16:25:27 -- common/autotest_common.sh@660 -- # es=1 00:48:23.642 16:25:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:48:23.642 16:25:27 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:48:23.642 16:25:27 -- common/autotest_common.sh@640 -- # local es=0 00:48:23.642 16:25:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:48:23.642 16:25:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:23.642 16:25:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:23.642 16:25:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:23.642 16:25:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:23.642 16:25:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:23.642 16:25:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:48:23.642 16:25:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:23.642 16:25:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:48:23.642 16:25:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:48:23.642 [2024-07-22 16:25:27.688132] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:48:23.643 [2024-07-22 16:25:27.688276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91263 ] 00:48:23.643 [2024-07-22 16:25:27.855884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:23.901 [2024-07-22 16:25:28.121745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:24.468 [2024-07-22 16:25:28.461287] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:48:24.468 [2024-07-22 16:25:28.461420] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:48:24.468 [2024-07-22 16:25:28.461459] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:25.034 [2024-07-22 16:25:29.268247] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:48:25.598 16:25:29 -- common/autotest_common.sh@643 -- # es=216 00:48:25.598 16:25:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:48:25.598 16:25:29 -- common/autotest_common.sh@652 -- # es=88 00:48:25.598 16:25:29 -- common/autotest_common.sh@653 -- # case "$es" in 00:48:25.598 16:25:29 -- common/autotest_common.sh@660 -- # es=1 00:48:25.599 16:25:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:48:25.599 16:25:29 -- dd/posix.sh@46 -- # gen_bytes 512 00:48:25.599 16:25:29 -- dd/common.sh@98 -- # xtrace_disable 00:48:25.599 16:25:29 -- common/autotest_common.sh@10 -- # set +x 00:48:25.599 16:25:29 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:25.599 [2024-07-22 16:25:29.815056] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:25.599 [2024-07-22 16:25:29.815256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91289 ] 00:48:25.857 [2024-07-22 16:25:29.991502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:26.114 [2024-07-22 16:25:30.251625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:27.745  Copying: 512/512 [B] (average 500 kBps) 00:48:27.745 00:48:27.745 ************************************ 00:48:27.745 END TEST dd_flag_nofollow_forced_aio 00:48:27.745 ************************************ 00:48:27.746 16:25:31 -- dd/posix.sh@49 -- # [[ u1mojxho9teobzahz5qedu2g5gdsthrdhkum1s67sc2fvez4sgk7b8wrmpq987vfqdilvq46e5s3bywxdqot968femyfg5kz810sw5wxsbz7h2uzx6trv8p3g9cv7getrw6b4jqzdawjj9ciiabtwuxd9921mcgugc1xq4ljchvhtweewg59f64rlquvp89zelcxa36fcxthietcmkoygzzvaqa1gl1013vcmd4se2baqvlsdh9thf072e68r5e7sf9t4c5uicagpctdotn84xn8cny80tr6rbsvhtvq4cn1xpwy25ehwppbg0x2odfm5806rsq66mt8fdwzoesiekile9njqrp78vhmmsf1721lpq4lpnhy4ycurm38uoae2vqowkm77kdj92fn8g38m22e04y041jbt89zwlxxnw8budsrjelqxacmbcxi60xi2yaercjr813p8znsfwl40ebfyfd3kg5cy7nif99e3hbt4xw5m7f84956yclye101 == \u\1\m\o\j\x\h\o\9\t\e\o\b\z\a\h\z\5\q\e\d\u\2\g\5\g\d\s\t\h\r\d\h\k\u\m\1\s\6\7\s\c\2\f\v\e\z\4\s\g\k\7\b\8\w\r\m\p\q\9\8\7\v\f\q\d\i\l\v\q\4\6\e\5\s\3\b\y\w\x\d\q\o\t\9\6\8\f\e\m\y\f\g\5\k\z\8\1\0\s\w\5\w\x\s\b\z\7\h\2\u\z\x\6\t\r\v\8\p\3\g\9\c\v\7\g\e\t\r\w\6\b\4\j\q\z\d\a\w\j\j\9\c\i\i\a\b\t\w\u\x\d\9\9\2\1\m\c\g\u\g\c\1\x\q\4\l\j\c\h\v\h\t\w\e\e\w\g\5\9\f\6\4\r\l\q\u\v\p\8\9\z\e\l\c\x\a\3\6\f\c\x\t\h\i\e\t\c\m\k\o\y\g\z\z\v\a\q\a\1\g\l\1\0\1\3\v\c\m\d\4\s\e\2\b\a\q\v\l\s\d\h\9\t\h\f\0\7\2\e\6\8\r\5\e\7\s\f\9\t\4\c\5\u\i\c\a\g\p\c\t\d\o\t\n\8\4\x\n\8\c\n\y\8\0\t\r\6\r\b\s\v\h\t\v\q\4\c\n\1\x\p\w\y\2\5\e\h\w\p\p\b\g\0\x\2\o\d\f\m\5\8\0\6\r\s\q\6\6\m\t\8\f\d\w\z\o\e\s\i\e\k\i\l\e\9\n\j\q\r\p\7\8\v\h\m\m\s\f\1\7\2\1\l\p\q\4\l\p\n\h\y\4\y\c\u\r\m\3\8\u\o\a\e\2\v\q\o\w\k\m\7\7\k\d\j\9\2\f\n\8\g\3\8\m\2\2\e\0\4\y\0\4\1\j\b\t\8\9\z\w\l\x\x\n\w\8\b\u\d\s\r\j\e\l\q\x\a\c\m\b\c\x\i\6\0\x\i\2\y\a\e\r\c\j\r\8\1\3\p\8\z\n\s\f\w\l\4\0\e\b\f\y\f\d\3\k\g\5\c\y\7\n\i\f\9\9\e\3\h\b\t\4\x\w\5\m\7\f\8\4\9\5\6\y\c\l\y\e\1\0\1 ]] 00:48:27.746 00:48:27.746 real 0m6.421s 00:48:27.746 user 0m5.134s 00:48:27.746 sys 0m0.974s 00:48:27.746 16:25:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:27.746 16:25:31 -- common/autotest_common.sh@10 -- # set +x 00:48:27.746 16:25:31 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:48:27.746 16:25:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:48:27.746 16:25:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:27.746 16:25:31 -- common/autotest_common.sh@10 -- # set +x 00:48:27.746 ************************************ 00:48:27.746 START TEST dd_flag_noatime_forced_aio 00:48:27.746 ************************************ 00:48:27.746 16:25:31 -- common/autotest_common.sh@1104 -- # noatime 00:48:27.746 16:25:31 -- dd/posix.sh@53 -- # local atime_if 00:48:27.746 16:25:31 -- dd/posix.sh@54 -- # local atime_of 00:48:27.746 16:25:31 -- dd/posix.sh@58 -- # gen_bytes 512 00:48:27.746 16:25:31 -- dd/common.sh@98 -- # xtrace_disable 00:48:27.746 16:25:31 -- common/autotest_common.sh@10 -- # set +x 00:48:27.746 16:25:31 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:48:27.746 16:25:31 -- dd/posix.sh@60 -- # atime_if=1721665530 
00:48:27.746 16:25:31 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:27.746 16:25:31 -- dd/posix.sh@61 -- # atime_of=1721665531 00:48:27.746 16:25:31 -- dd/posix.sh@66 -- # sleep 1 00:48:28.681 16:25:32 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:28.942 [2024-07-22 16:25:33.025773] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:48:28.942 [2024-07-22 16:25:33.025981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91347 ] 00:48:28.942 [2024-07-22 16:25:33.210413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:29.508 [2024-07-22 16:25:33.504109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:31.143  Copying: 512/512 [B] (average 500 kBps) 00:48:31.143 00:48:31.143 16:25:35 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:48:31.143 16:25:35 -- dd/posix.sh@69 -- # (( atime_if == 1721665530 )) 00:48:31.143 16:25:35 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:31.143 16:25:35 -- dd/posix.sh@70 -- # (( atime_of == 1721665531 )) 00:48:31.143 16:25:35 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:48:31.143 [2024-07-22 16:25:35.261170] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:31.143 [2024-07-22 16:25:35.262241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91371 ] 00:48:31.401 [2024-07-22 16:25:35.457073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:31.659 [2024-07-22 16:25:35.735560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:33.329  Copying: 512/512 [B] (average 500 kBps) 00:48:33.329 00:48:33.329 16:25:37 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:48:33.329 ************************************ 00:48:33.329 END TEST dd_flag_noatime_forced_aio 00:48:33.329 ************************************ 00:48:33.329 16:25:37 -- dd/posix.sh@73 -- # (( atime_if < 1721665536 )) 00:48:33.329 00:48:33.329 real 0m5.504s 00:48:33.329 user 0m3.555s 00:48:33.329 sys 0m0.719s 00:48:33.329 16:25:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:33.329 16:25:37 -- common/autotest_common.sh@10 -- # set +x 00:48:33.329 16:25:37 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:48:33.329 16:25:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:48:33.329 16:25:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:33.329 16:25:37 -- common/autotest_common.sh@10 -- # set +x 00:48:33.329 ************************************ 00:48:33.329 START TEST dd_flags_misc_forced_aio 00:48:33.329 ************************************ 00:48:33.329 16:25:37 -- common/autotest_common.sh@1104 -- # io 00:48:33.329 16:25:37 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:48:33.329 16:25:37 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:48:33.329 16:25:37 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:48:33.329 16:25:37 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:48:33.329 16:25:37 -- dd/posix.sh@86 -- # gen_bytes 512 00:48:33.329 16:25:37 -- dd/common.sh@98 -- # xtrace_disable 00:48:33.329 16:25:37 -- common/autotest_common.sh@10 -- # set +x 00:48:33.329 16:25:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:33.329 16:25:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:48:33.329 [2024-07-22 16:25:37.567835] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:33.329 [2024-07-22 16:25:37.568195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91415 ] 00:48:33.587 [2024-07-22 16:25:37.754711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:33.844 [2024-07-22 16:25:38.073751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:35.789  Copying: 512/512 [B] (average 500 kBps) 00:48:35.789 00:48:35.789 16:25:39 -- dd/posix.sh@93 -- # [[ 54l7dz3mwcwfuzdr5l0mn3pm6wcanptzbi2glxgnb3an2qpvtf5tkd8pqzs4bxzr0gv9uzhn0hin5d8wlzr6imvy1kb3ssnhz6qzqteef4i9k55poaxazopjxbq1njz6g5h9cirt91bje8pmrqkshrluysr9mflwt2c48ctiv88mume719xi9q2e5xin69talegozddkcq6msoj4rm0bgf8m34vjn3myrsdugrhcuvexbojbitrx7kteumouhcsrx4r7i5z69rh8o8ny88o7x4kmzrngccq0ti5w82607eat5veos8w8p6ow2ct3l4afzoj088203fc4y4k8v6hecug3ut3fqnau5ijs4kfjnhars3oqwqhawz1b9nkjkt29ey3tkjf10uu36k1bfktdpb9uja5rw6yv9isqgpn0x9sde414riiya960djqkq7t6vi9kq694qg8r0e53ona2qljgtxvn2qls4pqtc840pyss80dh4i46z4tuvmjfzb39 == \5\4\l\7\d\z\3\m\w\c\w\f\u\z\d\r\5\l\0\m\n\3\p\m\6\w\c\a\n\p\t\z\b\i\2\g\l\x\g\n\b\3\a\n\2\q\p\v\t\f\5\t\k\d\8\p\q\z\s\4\b\x\z\r\0\g\v\9\u\z\h\n\0\h\i\n\5\d\8\w\l\z\r\6\i\m\v\y\1\k\b\3\s\s\n\h\z\6\q\z\q\t\e\e\f\4\i\9\k\5\5\p\o\a\x\a\z\o\p\j\x\b\q\1\n\j\z\6\g\5\h\9\c\i\r\t\9\1\b\j\e\8\p\m\r\q\k\s\h\r\l\u\y\s\r\9\m\f\l\w\t\2\c\4\8\c\t\i\v\8\8\m\u\m\e\7\1\9\x\i\9\q\2\e\5\x\i\n\6\9\t\a\l\e\g\o\z\d\d\k\c\q\6\m\s\o\j\4\r\m\0\b\g\f\8\m\3\4\v\j\n\3\m\y\r\s\d\u\g\r\h\c\u\v\e\x\b\o\j\b\i\t\r\x\7\k\t\e\u\m\o\u\h\c\s\r\x\4\r\7\i\5\z\6\9\r\h\8\o\8\n\y\8\8\o\7\x\4\k\m\z\r\n\g\c\c\q\0\t\i\5\w\8\2\6\0\7\e\a\t\5\v\e\o\s\8\w\8\p\6\o\w\2\c\t\3\l\4\a\f\z\o\j\0\8\8\2\0\3\f\c\4\y\4\k\8\v\6\h\e\c\u\g\3\u\t\3\f\q\n\a\u\5\i\j\s\4\k\f\j\n\h\a\r\s\3\o\q\w\q\h\a\w\z\1\b\9\n\k\j\k\t\2\9\e\y\3\t\k\j\f\1\0\u\u\3\6\k\1\b\f\k\t\d\p\b\9\u\j\a\5\r\w\6\y\v\9\i\s\q\g\p\n\0\x\9\s\d\e\4\1\4\r\i\i\y\a\9\6\0\d\j\q\k\q\7\t\6\v\i\9\k\q\6\9\4\q\g\8\r\0\e\5\3\o\n\a\2\q\l\j\g\t\x\v\n\2\q\l\s\4\p\q\t\c\8\4\0\p\y\s\s\8\0\d\h\4\i\4\6\z\4\t\u\v\m\j\f\z\b\3\9 ]] 00:48:35.789 16:25:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:35.789 16:25:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:48:35.789 [2024-07-22 16:25:39.901879] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:35.789 [2024-07-22 16:25:39.902091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91440 ] 00:48:36.047 [2024-07-22 16:25:40.075942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:36.305 [2024-07-22 16:25:40.333666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:37.945  Copying: 512/512 [B] (average 500 kBps) 00:48:37.945 00:48:37.945 16:25:41 -- dd/posix.sh@93 -- # [[ 54l7dz3mwcwfuzdr5l0mn3pm6wcanptzbi2glxgnb3an2qpvtf5tkd8pqzs4bxzr0gv9uzhn0hin5d8wlzr6imvy1kb3ssnhz6qzqteef4i9k55poaxazopjxbq1njz6g5h9cirt91bje8pmrqkshrluysr9mflwt2c48ctiv88mume719xi9q2e5xin69talegozddkcq6msoj4rm0bgf8m34vjn3myrsdugrhcuvexbojbitrx7kteumouhcsrx4r7i5z69rh8o8ny88o7x4kmzrngccq0ti5w82607eat5veos8w8p6ow2ct3l4afzoj088203fc4y4k8v6hecug3ut3fqnau5ijs4kfjnhars3oqwqhawz1b9nkjkt29ey3tkjf10uu36k1bfktdpb9uja5rw6yv9isqgpn0x9sde414riiya960djqkq7t6vi9kq694qg8r0e53ona2qljgtxvn2qls4pqtc840pyss80dh4i46z4tuvmjfzb39 == \5\4\l\7\d\z\3\m\w\c\w\f\u\z\d\r\5\l\0\m\n\3\p\m\6\w\c\a\n\p\t\z\b\i\2\g\l\x\g\n\b\3\a\n\2\q\p\v\t\f\5\t\k\d\8\p\q\z\s\4\b\x\z\r\0\g\v\9\u\z\h\n\0\h\i\n\5\d\8\w\l\z\r\6\i\m\v\y\1\k\b\3\s\s\n\h\z\6\q\z\q\t\e\e\f\4\i\9\k\5\5\p\o\a\x\a\z\o\p\j\x\b\q\1\n\j\z\6\g\5\h\9\c\i\r\t\9\1\b\j\e\8\p\m\r\q\k\s\h\r\l\u\y\s\r\9\m\f\l\w\t\2\c\4\8\c\t\i\v\8\8\m\u\m\e\7\1\9\x\i\9\q\2\e\5\x\i\n\6\9\t\a\l\e\g\o\z\d\d\k\c\q\6\m\s\o\j\4\r\m\0\b\g\f\8\m\3\4\v\j\n\3\m\y\r\s\d\u\g\r\h\c\u\v\e\x\b\o\j\b\i\t\r\x\7\k\t\e\u\m\o\u\h\c\s\r\x\4\r\7\i\5\z\6\9\r\h\8\o\8\n\y\8\8\o\7\x\4\k\m\z\r\n\g\c\c\q\0\t\i\5\w\8\2\6\0\7\e\a\t\5\v\e\o\s\8\w\8\p\6\o\w\2\c\t\3\l\4\a\f\z\o\j\0\8\8\2\0\3\f\c\4\y\4\k\8\v\6\h\e\c\u\g\3\u\t\3\f\q\n\a\u\5\i\j\s\4\k\f\j\n\h\a\r\s\3\o\q\w\q\h\a\w\z\1\b\9\n\k\j\k\t\2\9\e\y\3\t\k\j\f\1\0\u\u\3\6\k\1\b\f\k\t\d\p\b\9\u\j\a\5\r\w\6\y\v\9\i\s\q\g\p\n\0\x\9\s\d\e\4\1\4\r\i\i\y\a\9\6\0\d\j\q\k\q\7\t\6\v\i\9\k\q\6\9\4\q\g\8\r\0\e\5\3\o\n\a\2\q\l\j\g\t\x\v\n\2\q\l\s\4\p\q\t\c\8\4\0\p\y\s\s\8\0\d\h\4\i\4\6\z\4\t\u\v\m\j\f\z\b\3\9 ]] 00:48:37.945 16:25:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:37.945 16:25:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:48:37.945 [2024-07-22 16:25:41.981065] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:37.945 [2024-07-22 16:25:41.981243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91461 ] 00:48:37.945 [2024-07-22 16:25:42.163123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:38.204 [2024-07-22 16:25:42.452547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:40.149  Copying: 512/512 [B] (average 83 kBps) 00:48:40.149 00:48:40.149 16:25:44 -- dd/posix.sh@93 -- # [[ 54l7dz3mwcwfuzdr5l0mn3pm6wcanptzbi2glxgnb3an2qpvtf5tkd8pqzs4bxzr0gv9uzhn0hin5d8wlzr6imvy1kb3ssnhz6qzqteef4i9k55poaxazopjxbq1njz6g5h9cirt91bje8pmrqkshrluysr9mflwt2c48ctiv88mume719xi9q2e5xin69talegozddkcq6msoj4rm0bgf8m34vjn3myrsdugrhcuvexbojbitrx7kteumouhcsrx4r7i5z69rh8o8ny88o7x4kmzrngccq0ti5w82607eat5veos8w8p6ow2ct3l4afzoj088203fc4y4k8v6hecug3ut3fqnau5ijs4kfjnhars3oqwqhawz1b9nkjkt29ey3tkjf10uu36k1bfktdpb9uja5rw6yv9isqgpn0x9sde414riiya960djqkq7t6vi9kq694qg8r0e53ona2qljgtxvn2qls4pqtc840pyss80dh4i46z4tuvmjfzb39 == \5\4\l\7\d\z\3\m\w\c\w\f\u\z\d\r\5\l\0\m\n\3\p\m\6\w\c\a\n\p\t\z\b\i\2\g\l\x\g\n\b\3\a\n\2\q\p\v\t\f\5\t\k\d\8\p\q\z\s\4\b\x\z\r\0\g\v\9\u\z\h\n\0\h\i\n\5\d\8\w\l\z\r\6\i\m\v\y\1\k\b\3\s\s\n\h\z\6\q\z\q\t\e\e\f\4\i\9\k\5\5\p\o\a\x\a\z\o\p\j\x\b\q\1\n\j\z\6\g\5\h\9\c\i\r\t\9\1\b\j\e\8\p\m\r\q\k\s\h\r\l\u\y\s\r\9\m\f\l\w\t\2\c\4\8\c\t\i\v\8\8\m\u\m\e\7\1\9\x\i\9\q\2\e\5\x\i\n\6\9\t\a\l\e\g\o\z\d\d\k\c\q\6\m\s\o\j\4\r\m\0\b\g\f\8\m\3\4\v\j\n\3\m\y\r\s\d\u\g\r\h\c\u\v\e\x\b\o\j\b\i\t\r\x\7\k\t\e\u\m\o\u\h\c\s\r\x\4\r\7\i\5\z\6\9\r\h\8\o\8\n\y\8\8\o\7\x\4\k\m\z\r\n\g\c\c\q\0\t\i\5\w\8\2\6\0\7\e\a\t\5\v\e\o\s\8\w\8\p\6\o\w\2\c\t\3\l\4\a\f\z\o\j\0\8\8\2\0\3\f\c\4\y\4\k\8\v\6\h\e\c\u\g\3\u\t\3\f\q\n\a\u\5\i\j\s\4\k\f\j\n\h\a\r\s\3\o\q\w\q\h\a\w\z\1\b\9\n\k\j\k\t\2\9\e\y\3\t\k\j\f\1\0\u\u\3\6\k\1\b\f\k\t\d\p\b\9\u\j\a\5\r\w\6\y\v\9\i\s\q\g\p\n\0\x\9\s\d\e\4\1\4\r\i\i\y\a\9\6\0\d\j\q\k\q\7\t\6\v\i\9\k\q\6\9\4\q\g\8\r\0\e\5\3\o\n\a\2\q\l\j\g\t\x\v\n\2\q\l\s\4\p\q\t\c\8\4\0\p\y\s\s\8\0\d\h\4\i\4\6\z\4\t\u\v\m\j\f\z\b\3\9 ]] 00:48:40.149 16:25:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:40.149 16:25:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:48:40.149 [2024-07-22 16:25:44.162123] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:40.149 [2024-07-22 16:25:44.162297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91491 ] 00:48:40.149 [2024-07-22 16:25:44.342462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:40.408 [2024-07-22 16:25:44.642538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:42.351  Copying: 512/512 [B] (average 125 kBps) 00:48:42.351 00:48:42.351 16:25:46 -- dd/posix.sh@93 -- # [[ 54l7dz3mwcwfuzdr5l0mn3pm6wcanptzbi2glxgnb3an2qpvtf5tkd8pqzs4bxzr0gv9uzhn0hin5d8wlzr6imvy1kb3ssnhz6qzqteef4i9k55poaxazopjxbq1njz6g5h9cirt91bje8pmrqkshrluysr9mflwt2c48ctiv88mume719xi9q2e5xin69talegozddkcq6msoj4rm0bgf8m34vjn3myrsdugrhcuvexbojbitrx7kteumouhcsrx4r7i5z69rh8o8ny88o7x4kmzrngccq0ti5w82607eat5veos8w8p6ow2ct3l4afzoj088203fc4y4k8v6hecug3ut3fqnau5ijs4kfjnhars3oqwqhawz1b9nkjkt29ey3tkjf10uu36k1bfktdpb9uja5rw6yv9isqgpn0x9sde414riiya960djqkq7t6vi9kq694qg8r0e53ona2qljgtxvn2qls4pqtc840pyss80dh4i46z4tuvmjfzb39 == \5\4\l\7\d\z\3\m\w\c\w\f\u\z\d\r\5\l\0\m\n\3\p\m\6\w\c\a\n\p\t\z\b\i\2\g\l\x\g\n\b\3\a\n\2\q\p\v\t\f\5\t\k\d\8\p\q\z\s\4\b\x\z\r\0\g\v\9\u\z\h\n\0\h\i\n\5\d\8\w\l\z\r\6\i\m\v\y\1\k\b\3\s\s\n\h\z\6\q\z\q\t\e\e\f\4\i\9\k\5\5\p\o\a\x\a\z\o\p\j\x\b\q\1\n\j\z\6\g\5\h\9\c\i\r\t\9\1\b\j\e\8\p\m\r\q\k\s\h\r\l\u\y\s\r\9\m\f\l\w\t\2\c\4\8\c\t\i\v\8\8\m\u\m\e\7\1\9\x\i\9\q\2\e\5\x\i\n\6\9\t\a\l\e\g\o\z\d\d\k\c\q\6\m\s\o\j\4\r\m\0\b\g\f\8\m\3\4\v\j\n\3\m\y\r\s\d\u\g\r\h\c\u\v\e\x\b\o\j\b\i\t\r\x\7\k\t\e\u\m\o\u\h\c\s\r\x\4\r\7\i\5\z\6\9\r\h\8\o\8\n\y\8\8\o\7\x\4\k\m\z\r\n\g\c\c\q\0\t\i\5\w\8\2\6\0\7\e\a\t\5\v\e\o\s\8\w\8\p\6\o\w\2\c\t\3\l\4\a\f\z\o\j\0\8\8\2\0\3\f\c\4\y\4\k\8\v\6\h\e\c\u\g\3\u\t\3\f\q\n\a\u\5\i\j\s\4\k\f\j\n\h\a\r\s\3\o\q\w\q\h\a\w\z\1\b\9\n\k\j\k\t\2\9\e\y\3\t\k\j\f\1\0\u\u\3\6\k\1\b\f\k\t\d\p\b\9\u\j\a\5\r\w\6\y\v\9\i\s\q\g\p\n\0\x\9\s\d\e\4\1\4\r\i\i\y\a\9\6\0\d\j\q\k\q\7\t\6\v\i\9\k\q\6\9\4\q\g\8\r\0\e\5\3\o\n\a\2\q\l\j\g\t\x\v\n\2\q\l\s\4\p\q\t\c\8\4\0\p\y\s\s\8\0\d\h\4\i\4\6\z\4\t\u\v\m\j\f\z\b\3\9 ]] 00:48:42.351 16:25:46 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:48:42.351 16:25:46 -- dd/posix.sh@86 -- # gen_bytes 512 00:48:42.351 16:25:46 -- dd/common.sh@98 -- # xtrace_disable 00:48:42.351 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:48:42.351 16:25:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:42.351 16:25:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:48:42.351 [2024-07-22 16:25:46.409239] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:42.351 [2024-07-22 16:25:46.409432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91515 ] 00:48:42.351 [2024-07-22 16:25:46.590607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:42.610 [2024-07-22 16:25:46.860681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:44.551  Copying: 512/512 [B] (average 500 kBps) 00:48:44.551 00:48:44.551 16:25:48 -- dd/posix.sh@93 -- # [[ gq303umc50k4hoqraizjh4gr20ke5kbths9b4e4xnjs1kxy50yd2vrrb3c0e7lwu6ww5ag24fs3g8eerlvsbf449huq8kh72njjlu359vi8illd3paefwuxs2qc7987xtv6wz8633hptw0pamshv5rd4oqwhbzz61pdyn31k9h3juubnvwyx7iz15bv2dush45uvf84v5j2l1zode8myjheb2i3hacgrjd96jmsmdjw8048k6h21vpwdjpj5b91m12p7parlhz4ogp1gtzwhqajw9ujv4fpdl7ghpjozjwsoqw475gbwp1bf5q263w35q7amlzczwckq9lqmogf930aaklxh8ovg2ync2mv68g2wi8mnpepjjpvqv4xijchepzy86s7d3zbcvy3m96i0w5xrkg5j3xbx55nvobhfsxhp3gr94tkstew7y5r0lkxoj9qktwu8h4h9av35hih9w02x7s3olcxrol1x2agrsnrh5xeq3dxvfzrdeyhac23a == \g\q\3\0\3\u\m\c\5\0\k\4\h\o\q\r\a\i\z\j\h\4\g\r\2\0\k\e\5\k\b\t\h\s\9\b\4\e\4\x\n\j\s\1\k\x\y\5\0\y\d\2\v\r\r\b\3\c\0\e\7\l\w\u\6\w\w\5\a\g\2\4\f\s\3\g\8\e\e\r\l\v\s\b\f\4\4\9\h\u\q\8\k\h\7\2\n\j\j\l\u\3\5\9\v\i\8\i\l\l\d\3\p\a\e\f\w\u\x\s\2\q\c\7\9\8\7\x\t\v\6\w\z\8\6\3\3\h\p\t\w\0\p\a\m\s\h\v\5\r\d\4\o\q\w\h\b\z\z\6\1\p\d\y\n\3\1\k\9\h\3\j\u\u\b\n\v\w\y\x\7\i\z\1\5\b\v\2\d\u\s\h\4\5\u\v\f\8\4\v\5\j\2\l\1\z\o\d\e\8\m\y\j\h\e\b\2\i\3\h\a\c\g\r\j\d\9\6\j\m\s\m\d\j\w\8\0\4\8\k\6\h\2\1\v\p\w\d\j\p\j\5\b\9\1\m\1\2\p\7\p\a\r\l\h\z\4\o\g\p\1\g\t\z\w\h\q\a\j\w\9\u\j\v\4\f\p\d\l\7\g\h\p\j\o\z\j\w\s\o\q\w\4\7\5\g\b\w\p\1\b\f\5\q\2\6\3\w\3\5\q\7\a\m\l\z\c\z\w\c\k\q\9\l\q\m\o\g\f\9\3\0\a\a\k\l\x\h\8\o\v\g\2\y\n\c\2\m\v\6\8\g\2\w\i\8\m\n\p\e\p\j\j\p\v\q\v\4\x\i\j\c\h\e\p\z\y\8\6\s\7\d\3\z\b\c\v\y\3\m\9\6\i\0\w\5\x\r\k\g\5\j\3\x\b\x\5\5\n\v\o\b\h\f\s\x\h\p\3\g\r\9\4\t\k\s\t\e\w\7\y\5\r\0\l\k\x\o\j\9\q\k\t\w\u\8\h\4\h\9\a\v\3\5\h\i\h\9\w\0\2\x\7\s\3\o\l\c\x\r\o\l\1\x\2\a\g\r\s\n\r\h\5\x\e\q\3\d\x\v\f\z\r\d\e\y\h\a\c\2\3\a ]] 00:48:44.551 16:25:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:44.551 16:25:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:48:44.551 [2024-07-22 16:25:48.760954] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:44.551 [2024-07-22 16:25:48.761152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91536 ] 00:48:44.809 [2024-07-22 16:25:48.938350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:45.067 [2024-07-22 16:25:49.205071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:46.699  Copying: 512/512 [B] (average 500 kBps) 00:48:46.699 00:48:46.699 16:25:50 -- dd/posix.sh@93 -- # [[ gq303umc50k4hoqraizjh4gr20ke5kbths9b4e4xnjs1kxy50yd2vrrb3c0e7lwu6ww5ag24fs3g8eerlvsbf449huq8kh72njjlu359vi8illd3paefwuxs2qc7987xtv6wz8633hptw0pamshv5rd4oqwhbzz61pdyn31k9h3juubnvwyx7iz15bv2dush45uvf84v5j2l1zode8myjheb2i3hacgrjd96jmsmdjw8048k6h21vpwdjpj5b91m12p7parlhz4ogp1gtzwhqajw9ujv4fpdl7ghpjozjwsoqw475gbwp1bf5q263w35q7amlzczwckq9lqmogf930aaklxh8ovg2ync2mv68g2wi8mnpepjjpvqv4xijchepzy86s7d3zbcvy3m96i0w5xrkg5j3xbx55nvobhfsxhp3gr94tkstew7y5r0lkxoj9qktwu8h4h9av35hih9w02x7s3olcxrol1x2agrsnrh5xeq3dxvfzrdeyhac23a == \g\q\3\0\3\u\m\c\5\0\k\4\h\o\q\r\a\i\z\j\h\4\g\r\2\0\k\e\5\k\b\t\h\s\9\b\4\e\4\x\n\j\s\1\k\x\y\5\0\y\d\2\v\r\r\b\3\c\0\e\7\l\w\u\6\w\w\5\a\g\2\4\f\s\3\g\8\e\e\r\l\v\s\b\f\4\4\9\h\u\q\8\k\h\7\2\n\j\j\l\u\3\5\9\v\i\8\i\l\l\d\3\p\a\e\f\w\u\x\s\2\q\c\7\9\8\7\x\t\v\6\w\z\8\6\3\3\h\p\t\w\0\p\a\m\s\h\v\5\r\d\4\o\q\w\h\b\z\z\6\1\p\d\y\n\3\1\k\9\h\3\j\u\u\b\n\v\w\y\x\7\i\z\1\5\b\v\2\d\u\s\h\4\5\u\v\f\8\4\v\5\j\2\l\1\z\o\d\e\8\m\y\j\h\e\b\2\i\3\h\a\c\g\r\j\d\9\6\j\m\s\m\d\j\w\8\0\4\8\k\6\h\2\1\v\p\w\d\j\p\j\5\b\9\1\m\1\2\p\7\p\a\r\l\h\z\4\o\g\p\1\g\t\z\w\h\q\a\j\w\9\u\j\v\4\f\p\d\l\7\g\h\p\j\o\z\j\w\s\o\q\w\4\7\5\g\b\w\p\1\b\f\5\q\2\6\3\w\3\5\q\7\a\m\l\z\c\z\w\c\k\q\9\l\q\m\o\g\f\9\3\0\a\a\k\l\x\h\8\o\v\g\2\y\n\c\2\m\v\6\8\g\2\w\i\8\m\n\p\e\p\j\j\p\v\q\v\4\x\i\j\c\h\e\p\z\y\8\6\s\7\d\3\z\b\c\v\y\3\m\9\6\i\0\w\5\x\r\k\g\5\j\3\x\b\x\5\5\n\v\o\b\h\f\s\x\h\p\3\g\r\9\4\t\k\s\t\e\w\7\y\5\r\0\l\k\x\o\j\9\q\k\t\w\u\8\h\4\h\9\a\v\3\5\h\i\h\9\w\0\2\x\7\s\3\o\l\c\x\r\o\l\1\x\2\a\g\r\s\n\r\h\5\x\e\q\3\d\x\v\f\z\r\d\e\y\h\a\c\2\3\a ]] 00:48:46.699 16:25:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:46.699 16:25:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:48:46.699 [2024-07-22 16:25:50.965714] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:46.699 [2024-07-22 16:25:50.965879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91561 ] 00:48:46.956 [2024-07-22 16:25:51.144899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:47.213 [2024-07-22 16:25:51.453074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:49.171  Copying: 512/512 [B] (average 125 kBps) 00:48:49.171 00:48:49.171 16:25:53 -- dd/posix.sh@93 -- # [[ gq303umc50k4hoqraizjh4gr20ke5kbths9b4e4xnjs1kxy50yd2vrrb3c0e7lwu6ww5ag24fs3g8eerlvsbf449huq8kh72njjlu359vi8illd3paefwuxs2qc7987xtv6wz8633hptw0pamshv5rd4oqwhbzz61pdyn31k9h3juubnvwyx7iz15bv2dush45uvf84v5j2l1zode8myjheb2i3hacgrjd96jmsmdjw8048k6h21vpwdjpj5b91m12p7parlhz4ogp1gtzwhqajw9ujv4fpdl7ghpjozjwsoqw475gbwp1bf5q263w35q7amlzczwckq9lqmogf930aaklxh8ovg2ync2mv68g2wi8mnpepjjpvqv4xijchepzy86s7d3zbcvy3m96i0w5xrkg5j3xbx55nvobhfsxhp3gr94tkstew7y5r0lkxoj9qktwu8h4h9av35hih9w02x7s3olcxrol1x2agrsnrh5xeq3dxvfzrdeyhac23a == \g\q\3\0\3\u\m\c\5\0\k\4\h\o\q\r\a\i\z\j\h\4\g\r\2\0\k\e\5\k\b\t\h\s\9\b\4\e\4\x\n\j\s\1\k\x\y\5\0\y\d\2\v\r\r\b\3\c\0\e\7\l\w\u\6\w\w\5\a\g\2\4\f\s\3\g\8\e\e\r\l\v\s\b\f\4\4\9\h\u\q\8\k\h\7\2\n\j\j\l\u\3\5\9\v\i\8\i\l\l\d\3\p\a\e\f\w\u\x\s\2\q\c\7\9\8\7\x\t\v\6\w\z\8\6\3\3\h\p\t\w\0\p\a\m\s\h\v\5\r\d\4\o\q\w\h\b\z\z\6\1\p\d\y\n\3\1\k\9\h\3\j\u\u\b\n\v\w\y\x\7\i\z\1\5\b\v\2\d\u\s\h\4\5\u\v\f\8\4\v\5\j\2\l\1\z\o\d\e\8\m\y\j\h\e\b\2\i\3\h\a\c\g\r\j\d\9\6\j\m\s\m\d\j\w\8\0\4\8\k\6\h\2\1\v\p\w\d\j\p\j\5\b\9\1\m\1\2\p\7\p\a\r\l\h\z\4\o\g\p\1\g\t\z\w\h\q\a\j\w\9\u\j\v\4\f\p\d\l\7\g\h\p\j\o\z\j\w\s\o\q\w\4\7\5\g\b\w\p\1\b\f\5\q\2\6\3\w\3\5\q\7\a\m\l\z\c\z\w\c\k\q\9\l\q\m\o\g\f\9\3\0\a\a\k\l\x\h\8\o\v\g\2\y\n\c\2\m\v\6\8\g\2\w\i\8\m\n\p\e\p\j\j\p\v\q\v\4\x\i\j\c\h\e\p\z\y\8\6\s\7\d\3\z\b\c\v\y\3\m\9\6\i\0\w\5\x\r\k\g\5\j\3\x\b\x\5\5\n\v\o\b\h\f\s\x\h\p\3\g\r\9\4\t\k\s\t\e\w\7\y\5\r\0\l\k\x\o\j\9\q\k\t\w\u\8\h\4\h\9\a\v\3\5\h\i\h\9\w\0\2\x\7\s\3\o\l\c\x\r\o\l\1\x\2\a\g\r\s\n\r\h\5\x\e\q\3\d\x\v\f\z\r\d\e\y\h\a\c\2\3\a ]] 00:48:49.171 16:25:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:48:49.171 16:25:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:48:49.171 [2024-07-22 16:25:53.245202] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:48:49.171 [2024-07-22 16:25:53.245373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91587 ] 00:48:49.171 [2024-07-22 16:25:53.417869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:49.429 [2024-07-22 16:25:53.701073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:51.369  Copying: 512/512 [B] (average 125 kBps) 00:48:51.369 00:48:51.369 16:25:55 -- dd/posix.sh@93 -- # [[ gq303umc50k4hoqraizjh4gr20ke5kbths9b4e4xnjs1kxy50yd2vrrb3c0e7lwu6ww5ag24fs3g8eerlvsbf449huq8kh72njjlu359vi8illd3paefwuxs2qc7987xtv6wz8633hptw0pamshv5rd4oqwhbzz61pdyn31k9h3juubnvwyx7iz15bv2dush45uvf84v5j2l1zode8myjheb2i3hacgrjd96jmsmdjw8048k6h21vpwdjpj5b91m12p7parlhz4ogp1gtzwhqajw9ujv4fpdl7ghpjozjwsoqw475gbwp1bf5q263w35q7amlzczwckq9lqmogf930aaklxh8ovg2ync2mv68g2wi8mnpepjjpvqv4xijchepzy86s7d3zbcvy3m96i0w5xrkg5j3xbx55nvobhfsxhp3gr94tkstew7y5r0lkxoj9qktwu8h4h9av35hih9w02x7s3olcxrol1x2agrsnrh5xeq3dxvfzrdeyhac23a == \g\q\3\0\3\u\m\c\5\0\k\4\h\o\q\r\a\i\z\j\h\4\g\r\2\0\k\e\5\k\b\t\h\s\9\b\4\e\4\x\n\j\s\1\k\x\y\5\0\y\d\2\v\r\r\b\3\c\0\e\7\l\w\u\6\w\w\5\a\g\2\4\f\s\3\g\8\e\e\r\l\v\s\b\f\4\4\9\h\u\q\8\k\h\7\2\n\j\j\l\u\3\5\9\v\i\8\i\l\l\d\3\p\a\e\f\w\u\x\s\2\q\c\7\9\8\7\x\t\v\6\w\z\8\6\3\3\h\p\t\w\0\p\a\m\s\h\v\5\r\d\4\o\q\w\h\b\z\z\6\1\p\d\y\n\3\1\k\9\h\3\j\u\u\b\n\v\w\y\x\7\i\z\1\5\b\v\2\d\u\s\h\4\5\u\v\f\8\4\v\5\j\2\l\1\z\o\d\e\8\m\y\j\h\e\b\2\i\3\h\a\c\g\r\j\d\9\6\j\m\s\m\d\j\w\8\0\4\8\k\6\h\2\1\v\p\w\d\j\p\j\5\b\9\1\m\1\2\p\7\p\a\r\l\h\z\4\o\g\p\1\g\t\z\w\h\q\a\j\w\9\u\j\v\4\f\p\d\l\7\g\h\p\j\o\z\j\w\s\o\q\w\4\7\5\g\b\w\p\1\b\f\5\q\2\6\3\w\3\5\q\7\a\m\l\z\c\z\w\c\k\q\9\l\q\m\o\g\f\9\3\0\a\a\k\l\x\h\8\o\v\g\2\y\n\c\2\m\v\6\8\g\2\w\i\8\m\n\p\e\p\j\j\p\v\q\v\4\x\i\j\c\h\e\p\z\y\8\6\s\7\d\3\z\b\c\v\y\3\m\9\6\i\0\w\5\x\r\k\g\5\j\3\x\b\x\5\5\n\v\o\b\h\f\s\x\h\p\3\g\r\9\4\t\k\s\t\e\w\7\y\5\r\0\l\k\x\o\j\9\q\k\t\w\u\8\h\4\h\9\a\v\3\5\h\i\h\9\w\0\2\x\7\s\3\o\l\c\x\r\o\l\1\x\2\a\g\r\s\n\r\h\5\x\e\q\3\d\x\v\f\z\r\d\e\y\h\a\c\2\3\a ]] 00:48:51.369 00:48:51.369 real 0m17.949s 00:48:51.369 user 0m14.335s 00:48:51.369 sys 0m2.681s 00:48:51.369 16:25:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:51.369 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:48:51.369 ************************************ 00:48:51.369 END TEST dd_flags_misc_forced_aio 00:48:51.369 ************************************ 00:48:51.369 16:25:55 -- dd/posix.sh@1 -- # cleanup 00:48:51.369 16:25:55 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:48:51.369 16:25:55 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:48:51.369 ************************************ 00:48:51.369 END TEST spdk_dd_posix 00:48:51.369 ************************************ 00:48:51.369 00:48:51.369 real 1m11.759s 00:48:51.369 user 0m55.407s 00:48:51.369 sys 0m10.747s 00:48:51.369 16:25:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:51.369 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:48:51.369 16:25:55 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:48:51.369 16:25:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:48:51.369 16:25:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:51.369 16:25:55 -- 
common/autotest_common.sh@10 -- # set +x 00:48:51.369 ************************************ 00:48:51.369 START TEST spdk_dd_malloc 00:48:51.369 ************************************ 00:48:51.369 16:25:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:48:51.369 * Looking for test storage... 00:48:51.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:48:51.369 16:25:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:51.369 16:25:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:51.369 16:25:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:51.369 16:25:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:51.369 16:25:55 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:51.369 16:25:55 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:51.369 16:25:55 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:51.369 16:25:55 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:51.369 16:25:55 -- paths/export.sh@6 -- # export PATH 00:48:51.369 16:25:55 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:48:51.369 16:25:55 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:48:51.369 16:25:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:48:51.369 16:25:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:48:51.369 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:48:51.369 ************************************ 00:48:51.369 START TEST dd_malloc_copy 00:48:51.369 ************************************ 00:48:51.369 16:25:55 -- common/autotest_common.sh@1104 -- # malloc_copy 00:48:51.369 16:25:55 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:48:51.369 16:25:55 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:48:51.369 16:25:55 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:48:51.369 16:25:55 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:48:51.369 16:25:55 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:48:51.369 16:25:55 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:48:51.369 16:25:55 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:48:51.369 16:25:55 -- dd/malloc.sh@28 -- # gen_conf 00:48:51.369 16:25:55 -- dd/common.sh@31 -- # xtrace_disable 00:48:51.369 16:25:55 -- common/autotest_common.sh@10 -- # set +x 00:48:51.628 { 00:48:51.628 "subsystems": [ 00:48:51.628 { 00:48:51.628 "subsystem": "bdev", 00:48:51.628 "config": [ 00:48:51.628 { 00:48:51.628 "params": { 00:48:51.628 "block_size": 512, 00:48:51.628 "num_blocks": 1048576, 00:48:51.628 "name": "malloc0" 00:48:51.628 }, 00:48:51.628 "method": "bdev_malloc_create" 00:48:51.628 }, 00:48:51.628 { 00:48:51.628 "params": { 00:48:51.628 "block_size": 512, 00:48:51.628 "num_blocks": 1048576, 00:48:51.628 "name": "malloc1" 00:48:51.628 }, 00:48:51.628 "method": "bdev_malloc_create" 
00:48:51.628 }, 00:48:51.628 { 00:48:51.628 "method": "bdev_wait_for_examine" 00:48:51.628 } 00:48:51.628 ] 00:48:51.628 } 00:48:51.628 ] 00:48:51.628 } 00:48:51.628 [2024-07-22 16:25:55.698050] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:48:51.628 [2024-07-22 16:25:55.698211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91676 ] 00:48:51.628 [2024-07-22 16:25:55.892875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:52.194 [2024-07-22 16:25:56.190832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:01.090  Copying: 145/512 [MB] (145 MBps) Copying: 293/512 [MB] (148 MBps) Copying: 440/512 [MB] (147 MBps) Copying: 512/512 [MB] (average 148 MBps) 00:49:01.090 00:49:01.090 16:26:05 -- dd/malloc.sh@33 -- # gen_conf 00:49:01.090 16:26:05 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:49:01.090 16:26:05 -- dd/common.sh@31 -- # xtrace_disable 00:49:01.090 16:26:05 -- common/autotest_common.sh@10 -- # set +x 00:49:01.090 { 00:49:01.090 "subsystems": [ 00:49:01.090 { 00:49:01.090 "subsystem": "bdev", 00:49:01.090 "config": [ 00:49:01.090 { 00:49:01.090 "params": { 00:49:01.090 "block_size": 512, 00:49:01.090 "num_blocks": 1048576, 00:49:01.090 "name": "malloc0" 00:49:01.090 }, 00:49:01.090 "method": "bdev_malloc_create" 00:49:01.090 }, 00:49:01.090 { 00:49:01.090 "params": { 00:49:01.090 "block_size": 512, 00:49:01.090 "num_blocks": 1048576, 00:49:01.090 "name": "malloc1" 00:49:01.090 }, 00:49:01.090 "method": "bdev_malloc_create" 00:49:01.090 }, 00:49:01.090 { 00:49:01.090 "method": "bdev_wait_for_examine" 00:49:01.090 } 00:49:01.091 ] 00:49:01.091 } 00:49:01.091 ] 00:49:01.091 } 00:49:01.362 [2024-07-22 16:26:05.379500] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:01.362 [2024-07-22 16:26:05.379670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91775 ] 00:49:01.362 [2024-07-22 16:26:05.558330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:01.644 [2024-07-22 16:26:05.830954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:11.647  Copying: 158/512 [MB] (158 MBps) Copying: 316/512 [MB] (157 MBps) Copying: 473/512 [MB] (156 MBps) Copying: 512/512 [MB] (average 157 MBps) 00:49:11.647 00:49:11.647 00:49:11.647 real 0m19.360s 00:49:11.647 user 0m17.384s 00:49:11.647 sys 0m1.788s 00:49:11.647 16:26:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:11.647 16:26:14 -- common/autotest_common.sh@10 -- # set +x 00:49:11.647 ************************************ 00:49:11.647 END TEST dd_malloc_copy 00:49:11.647 ************************************ 00:49:11.647 ************************************ 00:49:11.647 END TEST spdk_dd_malloc 00:49:11.647 ************************************ 00:49:11.647 00:49:11.647 real 0m19.492s 00:49:11.647 user 0m17.431s 00:49:11.647 sys 0m1.881s 00:49:11.647 16:26:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:11.647 16:26:15 -- common/autotest_common.sh@10 -- # set +x 00:49:11.647 16:26:15 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:49:11.647 16:26:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:49:11.647 16:26:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:11.647 16:26:15 -- common/autotest_common.sh@10 -- # set +x 00:49:11.647 ************************************ 00:49:11.647 START TEST spdk_dd_bdev_to_bdev 00:49:11.647 ************************************ 00:49:11.647 16:26:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:49:11.647 * Looking for test storage... 
00:49:11.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:49:11.647 16:26:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:11.647 16:26:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:11.647 16:26:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:11.647 16:26:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:11.647 16:26:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:11.647 16:26:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:11.647 16:26:15 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:11.647 16:26:15 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:11.647 16:26:15 -- paths/export.sh@6 -- # export PATH 00:49:11.647 16:26:15 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:49:11.647 16:26:15 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:49:11.648 16:26:15 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:49:11.648 16:26:15 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:49:11.648 16:26:15 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:49:11.648 16:26:15 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:49:11.648 16:26:15 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:49:11.648 [2024-07-22 16:26:15.240335] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:11.648 [2024-07-22 16:26:15.240527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91936 ] 00:49:11.648 [2024-07-22 16:26:15.417779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:11.648 [2024-07-22 16:26:15.698817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:13.592  Copying: 256/256 [MB] (average 1254 MBps) 00:49:13.592 00:49:13.592 16:26:17 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:49:13.592 16:26:17 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:49:13.592 16:26:17 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:49:13.592 16:26:17 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:49:13.592 16:26:17 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:49:13.592 16:26:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:49:13.592 16:26:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:13.592 16:26:17 -- common/autotest_common.sh@10 -- # set +x 00:49:13.592 ************************************ 00:49:13.592 START TEST dd_inflate_file 00:49:13.592 ************************************ 00:49:13.592 16:26:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:49:13.592 [2024-07-22 16:26:17.683650] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:13.592 [2024-07-22 16:26:17.684113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91962 ] 00:49:13.592 [2024-07-22 16:26:17.864253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:14.158 [2024-07-22 16:26:18.131651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:15.790  Copying: 64/64 [MB] (average 1254 MBps) 00:49:15.790 00:49:15.790 00:49:15.790 real 0m2.223s 00:49:15.790 user 0m1.738s 00:49:15.790 sys 0m0.368s 00:49:15.790 ************************************ 00:49:15.790 END TEST dd_inflate_file 00:49:15.790 ************************************ 00:49:15.790 16:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:15.790 16:26:19 -- common/autotest_common.sh@10 -- # set +x 00:49:15.790 16:26:19 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:49:15.790 16:26:19 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:49:15.790 16:26:19 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:49:15.790 16:26:19 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:49:15.790 16:26:19 -- dd/common.sh@31 -- # xtrace_disable 00:49:15.790 16:26:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:49:15.790 16:26:19 -- common/autotest_common.sh@10 -- # set +x 00:49:15.790 16:26:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:15.790 16:26:19 -- common/autotest_common.sh@10 -- # set +x 00:49:15.790 ************************************ 00:49:15.790 START TEST dd_copy_to_out_bdev 00:49:15.790 ************************************ 00:49:15.790 16:26:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:49:15.790 { 00:49:15.790 "subsystems": [ 00:49:15.790 { 00:49:15.790 "subsystem": "bdev", 00:49:15.790 "config": [ 00:49:15.790 { 00:49:15.790 "params": { 00:49:15.790 "block_size": 4096, 00:49:15.790 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:49:15.790 "name": "aio1" 00:49:15.790 }, 00:49:15.790 "method": "bdev_aio_create" 00:49:15.790 }, 00:49:15.790 { 00:49:15.790 "params": { 00:49:15.790 "trtype": "pcie", 00:49:15.790 "traddr": "0000:00:06.0", 00:49:15.790 "name": "Nvme0" 00:49:15.790 }, 00:49:15.790 "method": "bdev_nvme_attach_controller" 00:49:15.790 }, 00:49:15.790 { 00:49:15.790 "method": "bdev_wait_for_examine" 00:49:15.790 } 00:49:15.790 ] 00:49:15.790 } 00:49:15.790 ] 00:49:15.790 } 00:49:15.790 [2024-07-22 16:26:19.970885] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
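The test_file0_size=67108891 reported above is easy to account for: dd.dump0 evidently already held the 26-byte magic string plus its trailing newline from the earlier echo, and dd_inflate_file appended 64 one-MiB units on top of it with --oflag=append, so nothing was overwritten. A one-line check under that assumption:

echo $(( 64 * 1048576 + 26 + 1 ))   # 67108864 + 27 = 67108891 bytes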
00:49:15.790 [2024-07-22 16:26:19.971057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92013 ] 00:49:16.048 [2024-07-22 16:26:20.141243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:16.306 [2024-07-22 16:26:20.401425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:19.318  Copying: 50/64 [MB] (50 MBps) Copying: 64/64 [MB] (average 50 MBps) 00:49:19.318 00:49:19.318 ************************************ 00:49:19.318 END TEST dd_copy_to_out_bdev 00:49:19.318 ************************************ 00:49:19.318 00:49:19.318 real 0m3.457s 00:49:19.318 user 0m2.920s 00:49:19.318 sys 0m0.410s 00:49:19.318 16:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:19.318 16:26:23 -- common/autotest_common.sh@10 -- # set +x 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:49:19.318 16:26:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:19.318 16:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:19.318 16:26:23 -- common/autotest_common.sh@10 -- # set +x 00:49:19.318 ************************************ 00:49:19.318 START TEST dd_offset_magic 00:49:19.318 ************************************ 00:49:19.318 16:26:23 -- common/autotest_common.sh@1104 -- # offset_magic 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:49:19.318 16:26:23 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:49:19.318 16:26:23 -- dd/common.sh@31 -- # xtrace_disable 00:49:19.318 16:26:23 -- common/autotest_common.sh@10 -- # set +x 00:49:19.318 { 00:49:19.318 "subsystems": [ 00:49:19.318 { 00:49:19.318 "subsystem": "bdev", 00:49:19.318 "config": [ 00:49:19.318 { 00:49:19.318 "params": { 00:49:19.318 "block_size": 4096, 00:49:19.318 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:49:19.318 "name": "aio1" 00:49:19.318 }, 00:49:19.318 "method": "bdev_aio_create" 00:49:19.318 }, 00:49:19.318 { 00:49:19.318 "params": { 00:49:19.318 "trtype": "pcie", 00:49:19.318 "traddr": "0000:00:06.0", 00:49:19.318 "name": "Nvme0" 00:49:19.318 }, 00:49:19.318 "method": "bdev_nvme_attach_controller" 00:49:19.318 }, 00:49:19.318 { 00:49:19.318 "method": "bdev_wait_for_examine" 00:49:19.318 } 00:49:19.318 ] 00:49:19.318 } 00:49:19.318 ] 00:49:19.318 } 00:49:19.318 [2024-07-22 16:26:23.469835] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:19.318 [2024-07-22 16:26:23.470048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92070 ] 00:49:19.576 [2024-07-22 16:26:23.644189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:19.835 [2024-07-22 16:26:23.921312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:22.145  Copying: 65/65 [MB] (average 158 MBps) 00:49:22.145 00:49:22.145 16:26:26 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:49:22.145 16:26:26 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:49:22.145 16:26:26 -- dd/common.sh@31 -- # xtrace_disable 00:49:22.146 16:26:26 -- common/autotest_common.sh@10 -- # set +x 00:49:22.146 { 00:49:22.146 "subsystems": [ 00:49:22.146 { 00:49:22.146 "subsystem": "bdev", 00:49:22.146 "config": [ 00:49:22.146 { 00:49:22.146 "params": { 00:49:22.146 "block_size": 4096, 00:49:22.146 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:49:22.146 "name": "aio1" 00:49:22.146 }, 00:49:22.146 "method": "bdev_aio_create" 00:49:22.146 }, 00:49:22.146 { 00:49:22.146 "params": { 00:49:22.146 "trtype": "pcie", 00:49:22.146 "traddr": "0000:00:06.0", 00:49:22.146 "name": "Nvme0" 00:49:22.146 }, 00:49:22.146 "method": "bdev_nvme_attach_controller" 00:49:22.146 }, 00:49:22.146 { 00:49:22.146 "method": "bdev_wait_for_examine" 00:49:22.146 } 00:49:22.146 ] 00:49:22.146 } 00:49:22.146 ] 00:49:22.146 } 00:49:22.146 [2024-07-22 16:26:26.202570] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:22.146 [2024-07-22 16:26:26.202757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92108 ] 00:49:22.146 [2024-07-22 16:26:26.386377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:22.712 [2024-07-22 16:26:26.723626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:24.343  Copying: 1024/1024 [kB] (average 1000 MBps) 00:49:24.343 00:49:24.343 16:26:28 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:49:24.343 16:26:28 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:49:24.343 16:26:28 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:49:24.343 16:26:28 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:49:24.343 16:26:28 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:49:24.343 16:26:28 -- dd/common.sh@31 -- # xtrace_disable 00:49:24.343 16:26:28 -- common/autotest_common.sh@10 -- # set +x 00:49:24.343 { 00:49:24.343 "subsystems": [ 00:49:24.343 { 00:49:24.343 "subsystem": "bdev", 00:49:24.343 "config": [ 00:49:24.343 { 00:49:24.343 "params": { 00:49:24.343 "block_size": 4096, 00:49:24.343 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:49:24.343 "name": "aio1" 00:49:24.343 }, 00:49:24.343 "method": "bdev_aio_create" 00:49:24.343 }, 00:49:24.344 { 00:49:24.344 "params": { 00:49:24.344 "trtype": "pcie", 00:49:24.344 "traddr": "0000:00:06.0", 00:49:24.344 "name": "Nvme0" 00:49:24.344 }, 00:49:24.344 "method": "bdev_nvme_attach_controller" 00:49:24.344 }, 00:49:24.344 { 00:49:24.344 "method": "bdev_wait_for_examine" 00:49:24.344 } 00:49:24.344 ] 00:49:24.344 } 00:49:24.344 ] 00:49:24.344 } 00:49:24.344 [2024-07-22 16:26:28.491933] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:24.344 [2024-07-22 16:26:28.492106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92140 ] 00:49:24.601 [2024-07-22 16:26:28.662328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:24.859 [2024-07-22 16:26:28.923955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:26.797  Copying: 65/65 [MB] (average 200 MBps) 00:49:26.797 00:49:26.797 16:26:31 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:49:26.797 16:26:31 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:49:26.797 16:26:31 -- dd/common.sh@31 -- # xtrace_disable 00:49:26.797 16:26:31 -- common/autotest_common.sh@10 -- # set +x 00:49:26.797 { 00:49:26.797 "subsystems": [ 00:49:26.797 { 00:49:26.797 "subsystem": "bdev", 00:49:26.797 "config": [ 00:49:26.797 { 00:49:26.797 "params": { 00:49:26.797 "block_size": 4096, 00:49:26.797 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:49:26.797 "name": "aio1" 00:49:26.797 }, 00:49:26.797 "method": "bdev_aio_create" 00:49:26.797 }, 00:49:26.797 { 00:49:26.797 "params": { 00:49:26.797 "trtype": "pcie", 00:49:26.797 "traddr": "0000:00:06.0", 00:49:26.797 "name": "Nvme0" 00:49:26.797 }, 00:49:26.797 "method": "bdev_nvme_attach_controller" 00:49:26.797 }, 00:49:26.797 { 00:49:26.797 "method": "bdev_wait_for_examine" 00:49:26.797 } 00:49:26.797 ] 00:49:26.797 } 00:49:26.797 ] 00:49:26.797 } 00:49:27.056 [2024-07-22 16:26:31.082008] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:27.056 [2024-07-22 16:26:31.082174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92177 ] 00:49:27.056 [2024-07-22 16:26:31.249782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:27.314 [2024-07-22 16:26:31.511100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:29.255  Copying: 1024/1024 [kB] (average 1000 MBps) 00:49:29.255 00:49:29.255 16:26:33 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:49:29.255 16:26:33 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:49:29.255 00:49:29.255 real 0m9.798s 00:49:29.255 user 0m7.214s 00:49:29.255 sys 0m1.529s 00:49:29.255 16:26:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:29.255 16:26:33 -- common/autotest_common.sh@10 -- # set +x 00:49:29.255 ************************************ 00:49:29.255 END TEST dd_offset_magic 00:49:29.255 ************************************ 00:49:29.255 16:26:33 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:49:29.255 16:26:33 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:49:29.255 16:26:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:49:29.255 16:26:33 -- dd/common.sh@11 -- # local nvme_ref= 00:49:29.255 16:26:33 -- dd/common.sh@12 -- # local size=4194330 00:49:29.255 16:26:33 -- dd/common.sh@14 -- # local bs=1048576 00:49:29.255 16:26:33 -- dd/common.sh@15 -- # local count=5 00:49:29.255 16:26:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:49:29.255 16:26:33 -- dd/common.sh@18 -- # gen_conf 00:49:29.255 16:26:33 -- dd/common.sh@31 -- # xtrace_disable 00:49:29.255 16:26:33 -- common/autotest_common.sh@10 -- # set +x 00:49:29.255 { 00:49:29.255 "subsystems": [ 00:49:29.255 { 00:49:29.255 "subsystem": "bdev", 00:49:29.255 "config": [ 00:49:29.255 { 00:49:29.255 "params": { 00:49:29.255 "block_size": 4096, 00:49:29.255 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:49:29.255 "name": "aio1" 00:49:29.255 }, 00:49:29.255 "method": "bdev_aio_create" 00:49:29.255 }, 00:49:29.255 { 00:49:29.255 "params": { 00:49:29.255 "trtype": "pcie", 00:49:29.255 "traddr": "0000:00:06.0", 00:49:29.255 "name": "Nvme0" 00:49:29.255 }, 00:49:29.255 "method": "bdev_nvme_attach_controller" 00:49:29.255 }, 00:49:29.255 { 00:49:29.255 "method": "bdev_wait_for_examine" 00:49:29.255 } 00:49:29.255 ] 00:49:29.255 } 00:49:29.255 ] 00:49:29.255 } 00:49:29.255 [2024-07-22 16:26:33.302859] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
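In the dd_offset_magic passes above, --seek and --skip count I/O units of --bs (the option help captured further down in this log states this), so with --bs=1048576 the test writes 65 units starting 16 MiB and then 64 MiB into the target, reads one unit back from each offset, and compares the first 26 bytes against the magic string. A plain-dd, file-backed analogue of one round trip, offered purely as an illustration with made-up file names:

echo 'This Is Our Magic, find it' > src.bin
dd if=src.bin of=disk.img bs=1048576 seek=16 conv=notrunc   # place the data 16 MiB in
dd if=disk.img of=out.bin bs=1048576 skip=16 count=1        # read the same unit back
read -rn26 magic_check < out.bin
[[ $magic_check == 'This Is Our Magic, find it' ]] && echo 'magic found'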
00:49:29.255 [2024-07-22 16:26:33.303038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92220 ] 00:49:29.255 [2024-07-22 16:26:33.469593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:29.512 [2024-07-22 16:26:33.734369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:31.450  Copying: 5120/5120 [kB] (average 1666 MBps) 00:49:31.450 00:49:31.450 16:26:35 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:49:31.450 16:26:35 -- dd/common.sh@10 -- # local bdev=aio1 00:49:31.450 16:26:35 -- dd/common.sh@11 -- # local nvme_ref= 00:49:31.450 16:26:35 -- dd/common.sh@12 -- # local size=4194330 00:49:31.450 16:26:35 -- dd/common.sh@14 -- # local bs=1048576 00:49:31.450 16:26:35 -- dd/common.sh@15 -- # local count=5 00:49:31.450 16:26:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:49:31.450 16:26:35 -- dd/common.sh@18 -- # gen_conf 00:49:31.450 16:26:35 -- dd/common.sh@31 -- # xtrace_disable 00:49:31.450 16:26:35 -- common/autotest_common.sh@10 -- # set +x 00:49:31.450 { 00:49:31.450 "subsystems": [ 00:49:31.450 { 00:49:31.450 "subsystem": "bdev", 00:49:31.450 "config": [ 00:49:31.450 { 00:49:31.450 "params": { 00:49:31.450 "block_size": 4096, 00:49:31.450 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:49:31.450 "name": "aio1" 00:49:31.450 }, 00:49:31.450 "method": "bdev_aio_create" 00:49:31.450 }, 00:49:31.450 { 00:49:31.450 "params": { 00:49:31.450 "trtype": "pcie", 00:49:31.450 "traddr": "0000:00:06.0", 00:49:31.450 "name": "Nvme0" 00:49:31.450 }, 00:49:31.450 "method": "bdev_nvme_attach_controller" 00:49:31.450 }, 00:49:31.450 { 00:49:31.450 "method": "bdev_wait_for_examine" 00:49:31.450 } 00:49:31.450 ] 00:49:31.450 } 00:49:31.450 ] 00:49:31.450 } 00:49:31.450 [2024-07-22 16:26:35.617975] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
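The clear_nvme cleanup above zero-fills the first --count units of each target by reading from /dev/zero and handing spdk_dd the bdev config on file descriptor 62. The same thing works with the config in an ordinary file, since --json simply takes a path; the sketch below keeps only the AIO bdev from the captured config (the bdev_nvme_attach_controller entry would additionally need the controller at 0000:00:06.0 to be present) and uses an illustrative temp-file name:

cat > /tmp/dd_aio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 4096,
                      "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
                      "name": "aio1" },
          "method": "bdev_aio_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# zero the first 5 MiB of the aio1 bdev, mirroring the cleanup above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/zero --bs=1048576 --count=5 --ob=aio1 --json /tmp/dd_aio.json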
00:49:31.450 [2024-07-22 16:26:35.618159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92252 ] 00:49:31.709 [2024-07-22 16:26:35.786067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:31.967 [2024-07-22 16:26:36.049249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:33.502  Copying: 5120/5120 [kB] (average 263 MBps) 00:49:33.502 00:49:33.502 16:26:37 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:49:33.761 00:49:33.761 real 0m22.728s 00:49:33.761 user 0m17.294s 00:49:33.761 sys 0m3.762s 00:49:33.761 16:26:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:33.761 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:49:33.761 ************************************ 00:49:33.761 END TEST spdk_dd_bdev_to_bdev 00:49:33.761 ************************************ 00:49:33.761 16:26:37 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:49:33.761 16:26:37 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:49:33.761 16:26:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:33.761 16:26:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:33.761 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:49:33.761 ************************************ 00:49:33.761 START TEST spdk_dd_sparse 00:49:33.761 ************************************ 00:49:33.761 16:26:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:49:33.761 * Looking for test storage... 
00:49:33.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:49:33.761 16:26:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:33.761 16:26:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:33.761 16:26:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:33.761 16:26:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:33.761 16:26:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:33.761 16:26:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:33.761 16:26:37 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:33.761 16:26:37 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:33.761 16:26:37 -- paths/export.sh@6 -- # export PATH 00:49:33.761 16:26:37 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:33.761 16:26:37 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:49:33.761 16:26:37 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:49:33.761 16:26:37 -- dd/sparse.sh@110 -- # file1=file_zero1 00:49:33.761 16:26:37 -- dd/sparse.sh@111 -- # file2=file_zero2 00:49:33.761 16:26:37 -- dd/sparse.sh@112 -- # file3=file_zero3 00:49:33.761 16:26:37 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:49:33.761 16:26:37 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:49:33.761 16:26:37 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:49:33.761 16:26:37 -- dd/sparse.sh@118 -- # prepare 00:49:33.761 16:26:37 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:49:33.761 16:26:37 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:49:33.761 1+0 records in 00:49:33.761 1+0 records out 00:49:33.761 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00964006 s, 435 MB/s 00:49:33.761 16:26:37 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:49:33.761 1+0 records in 00:49:33.761 1+0 records out 00:49:33.761 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00817079 s, 513 MB/s 00:49:33.762 16:26:37 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:49:33.762 1+0 records in 00:49:33.762 1+0 records out 00:49:33.762 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00862937 s, 486 MB/s 00:49:33.762 16:26:37 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:49:33.762 16:26:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:33.762 16:26:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:33.762 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:49:33.762 ************************************ 00:49:33.762 START TEST dd_sparse_file_to_file 00:49:33.762 ************************************ 00:49:33.762 16:26:38 -- common/autotest_common.sh@1104 -- # file_to_file 00:49:33.762 16:26:38 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:49:33.762 16:26:38 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:49:33.762 16:26:38 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:49:33.762 16:26:38 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:49:33.762 16:26:38 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:49:33.762 16:26:38 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:49:33.762 16:26:38 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:49:33.762 16:26:38 -- dd/sparse.sh@41 -- # gen_conf 00:49:33.762 16:26:38 -- dd/common.sh@31 -- # xtrace_disable 00:49:33.762 16:26:38 -- common/autotest_common.sh@10 -- # set +x 00:49:33.762 { 00:49:33.762 
"subsystems": [ 00:49:33.762 { 00:49:33.762 "subsystem": "bdev", 00:49:33.762 "config": [ 00:49:33.762 { 00:49:33.762 "params": { 00:49:33.762 "block_size": 4096, 00:49:33.762 "filename": "dd_sparse_aio_disk", 00:49:33.762 "name": "dd_aio" 00:49:33.762 }, 00:49:33.762 "method": "bdev_aio_create" 00:49:33.762 }, 00:49:33.762 { 00:49:33.762 "params": { 00:49:33.762 "lvs_name": "dd_lvstore", 00:49:33.762 "bdev_name": "dd_aio" 00:49:33.762 }, 00:49:33.762 "method": "bdev_lvol_create_lvstore" 00:49:33.762 }, 00:49:33.762 { 00:49:33.762 "method": "bdev_wait_for_examine" 00:49:33.762 } 00:49:33.762 ] 00:49:33.762 } 00:49:33.762 ] 00:49:33.762 } 00:49:34.019 [2024-07-22 16:26:38.087043] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:34.019 [2024-07-22 16:26:38.087285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92331 ] 00:49:34.019 [2024-07-22 16:26:38.267392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:34.587 [2024-07-22 16:26:38.620547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:36.247  Copying: 12/36 [MB] (average 923 MBps) 00:49:36.247 00:49:36.247 16:26:40 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:49:36.247 16:26:40 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:49:36.247 16:26:40 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:49:36.247 16:26:40 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:49:36.247 16:26:40 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:49:36.247 16:26:40 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:49:36.247 16:26:40 -- dd/sparse.sh@52 -- # stat1_b=24576 00:49:36.247 16:26:40 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:49:36.247 16:26:40 -- dd/sparse.sh@53 -- # stat2_b=24576 00:49:36.247 16:26:40 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:49:36.247 00:49:36.247 real 0m2.474s 00:49:36.247 user 0m1.942s 00:49:36.247 sys 0m0.409s 00:49:36.247 16:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:36.248 16:26:40 -- common/autotest_common.sh@10 -- # set +x 00:49:36.248 ************************************ 00:49:36.248 END TEST dd_sparse_file_to_file 00:49:36.248 ************************************ 00:49:36.248 16:26:40 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:49:36.248 16:26:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:36.506 16:26:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:36.506 16:26:40 -- common/autotest_common.sh@10 -- # set +x 00:49:36.506 ************************************ 00:49:36.506 START TEST dd_sparse_file_to_bdev 00:49:36.506 ************************************ 00:49:36.506 16:26:40 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:49:36.506 16:26:40 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:49:36.506 16:26:40 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:49:36.506 16:26:40 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:49:36.506 16:26:40 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:49:36.506 16:26:40 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol 
--bs=12582912 --sparse --json /dev/fd/62 00:49:36.506 16:26:40 -- dd/sparse.sh@73 -- # gen_conf 00:49:36.506 16:26:40 -- dd/common.sh@31 -- # xtrace_disable 00:49:36.506 16:26:40 -- common/autotest_common.sh@10 -- # set +x 00:49:36.506 { 00:49:36.506 "subsystems": [ 00:49:36.506 { 00:49:36.506 "subsystem": "bdev", 00:49:36.506 "config": [ 00:49:36.506 { 00:49:36.506 "params": { 00:49:36.506 "block_size": 4096, 00:49:36.506 "filename": "dd_sparse_aio_disk", 00:49:36.506 "name": "dd_aio" 00:49:36.506 }, 00:49:36.506 "method": "bdev_aio_create" 00:49:36.506 }, 00:49:36.506 { 00:49:36.506 "params": { 00:49:36.506 "lvs_name": "dd_lvstore", 00:49:36.506 "lvol_name": "dd_lvol", 00:49:36.506 "size": 37748736, 00:49:36.506 "thin_provision": true 00:49:36.506 }, 00:49:36.506 "method": "bdev_lvol_create" 00:49:36.507 }, 00:49:36.507 { 00:49:36.507 "method": "bdev_wait_for_examine" 00:49:36.507 } 00:49:36.507 ] 00:49:36.507 } 00:49:36.507 ] 00:49:36.507 } 00:49:36.507 [2024-07-22 16:26:40.610851] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:36.507 [2024-07-22 16:26:40.611029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92393 ] 00:49:36.766 [2024-07-22 16:26:40.807490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:37.024 [2024-07-22 16:26:41.126451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:37.283 [2024-07-22 16:26:41.518132] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:49:37.541  Copying: 12/36 [MB] (average 521 MBps)[2024-07-22 16:26:41.583954] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:49:38.939 00:49:38.939 00:49:38.939 00:49:38.939 real 0m2.526s 00:49:38.939 user 0m2.005s 00:49:38.939 sys 0m0.413s 00:49:38.939 16:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:38.939 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:49:38.939 ************************************ 00:49:38.939 END TEST dd_sparse_file_to_bdev 00:49:38.939 ************************************ 00:49:38.939 16:26:43 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:49:38.939 16:26:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:38.939 16:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:38.939 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:49:38.939 ************************************ 00:49:38.939 START TEST dd_sparse_bdev_to_file 00:49:38.939 ************************************ 00:49:38.939 16:26:43 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:49:38.939 16:26:43 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:49:38.939 16:26:43 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:49:38.939 16:26:43 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:49:38.939 16:26:43 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:49:38.939 16:26:43 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:49:38.939 16:26:43 -- dd/sparse.sh@91 -- # gen_conf 
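A note on the sparse bookkeeping above: the prepare step builds file_zero1 by writing three 4 MiB chunks at seek offsets 0, 4 and 8 (again counted in bs units), which yields a 36 MiB apparent size with only 12 MiB of data actually allocated. That is exactly what the stat checks verify, because %s reports apparent size while %b reports allocated 512-byte blocks, and 24576 * 512 = 12582912. The file_to_bdev leg then pairs the same input with a thin-provisioned logical volume and --sparse, which the option help describes as hole skipping, so the holes are never written out. A standalone illustration of the file side, with an illustrative file name and expected numbers that assume a hole-punching filesystem:

dd if=/dev/zero of=sparse.bin bs=4M count=1
dd if=/dev/zero of=sparse.bin bs=4M count=1 seek=4
dd if=/dev/zero of=sparse.bin bs=4M count=1 seek=8
stat --printf='apparent=%s allocated_blocks=%b\n' sparse.bin
# expected: apparent=37748736 allocated_blocks=24576
# (24576 blocks * 512 bytes = 12 MiB of real data under 36 MiB of apparent size)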
00:49:38.939 16:26:43 -- dd/common.sh@31 -- # xtrace_disable 00:49:38.939 16:26:43 -- common/autotest_common.sh@10 -- # set +x 00:49:38.939 { 00:49:38.939 "subsystems": [ 00:49:38.939 { 00:49:38.939 "subsystem": "bdev", 00:49:38.939 "config": [ 00:49:38.939 { 00:49:38.939 "params": { 00:49:38.939 "block_size": 4096, 00:49:38.939 "filename": "dd_sparse_aio_disk", 00:49:38.939 "name": "dd_aio" 00:49:38.939 }, 00:49:38.939 "method": "bdev_aio_create" 00:49:38.939 }, 00:49:38.939 { 00:49:38.939 "method": "bdev_wait_for_examine" 00:49:38.939 } 00:49:38.939 ] 00:49:38.939 } 00:49:38.939 ] 00:49:38.939 } 00:49:38.939 [2024-07-22 16:26:43.206385] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:38.939 [2024-07-22 16:26:43.206666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92442 ] 00:49:39.197 [2024-07-22 16:26:43.396558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:39.454 [2024-07-22 16:26:43.667236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:41.394  Copying: 12/36 [MB] (average 1000 MBps) 00:49:41.394 00:49:41.394 16:26:45 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:49:41.394 16:26:45 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:49:41.394 16:26:45 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:49:41.394 16:26:45 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:49:41.394 16:26:45 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:49:41.394 16:26:45 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:49:41.394 16:26:45 -- dd/sparse.sh@102 -- # stat2_b=24576 00:49:41.394 16:26:45 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:49:41.394 16:26:45 -- dd/sparse.sh@103 -- # stat3_b=24576 00:49:41.394 16:26:45 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:49:41.394 00:49:41.394 real 0m2.384s 00:49:41.394 user 0m1.872s 00:49:41.394 sys 0m0.399s 00:49:41.394 16:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:41.394 ************************************ 00:49:41.394 END TEST dd_sparse_bdev_to_file 00:49:41.394 ************************************ 00:49:41.394 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:49:41.394 16:26:45 -- dd/sparse.sh@1 -- # cleanup 00:49:41.394 16:26:45 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:49:41.394 16:26:45 -- dd/sparse.sh@12 -- # rm file_zero1 00:49:41.394 16:26:45 -- dd/sparse.sh@13 -- # rm file_zero2 00:49:41.394 16:26:45 -- dd/sparse.sh@14 -- # rm file_zero3 00:49:41.394 00:49:41.394 real 0m7.700s 00:49:41.394 user 0m5.901s 00:49:41.394 sys 0m1.452s 00:49:41.394 16:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:41.394 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:49:41.394 ************************************ 00:49:41.394 END TEST spdk_dd_sparse 00:49:41.394 ************************************ 00:49:41.394 16:26:45 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:49:41.394 16:26:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:41.394 16:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:41.394 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:49:41.394 ************************************ 00:49:41.394 START TEST spdk_dd_negative 00:49:41.394 ************************************ 00:49:41.394 16:26:45 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:49:41.652 * Looking for test storage... 00:49:41.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:49:41.652 16:26:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:41.652 16:26:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:41.652 16:26:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:41.652 16:26:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:41.652 16:26:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:41.652 16:26:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:41.652 16:26:45 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:41.652 16:26:45 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:41.652 16:26:45 -- paths/export.sh@6 -- # export PATH 00:49:41.652 
16:26:45 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:41.652 16:26:45 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:49:41.652 16:26:45 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:49:41.652 16:26:45 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:49:41.652 16:26:45 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:49:41.652 16:26:45 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:49:41.652 16:26:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:41.652 16:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:41.652 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:49:41.652 ************************************ 00:49:41.652 START TEST dd_invalid_arguments 00:49:41.652 ************************************ 00:49:41.652 16:26:45 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:49:41.652 16:26:45 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:49:41.652 16:26:45 -- common/autotest_common.sh@640 -- # local es=0 00:49:41.652 16:26:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:49:41.652 16:26:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.652 16:26:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.652 16:26:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.652 16:26:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.652 16:26:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.652 16:26:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.653 16:26:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.653 16:26:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:41.653 16:26:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:49:41.653 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:49:41.653 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:49:41.653 options: 00:49:41.653 -c, --config JSON config file (default none) 00:49:41.653 --json JSON config file (default none) 00:49:41.653 --json-ignore-init-errors 00:49:41.653 don't exit on invalid config entry 00:49:41.653 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:49:41.653 -g, --single-file-segments 00:49:41.653 force creating just one hugetlbfs file 
00:49:41.653 -h, --help show this usage 00:49:41.653 -i, --shm-id shared memory ID (optional) 00:49:41.653 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:49:41.653 --lcores lcore to CPU mapping list. The list is in the format: 00:49:41.653 [<,lcores[@CPUs]>...] 00:49:41.653 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:49:41.653 Within the group, '-' is used for range separator, 00:49:41.653 ',' is used for single number separator. 00:49:41.653 '( )' can be omitted for single element group, 00:49:41.653 '@' can be omitted if cpus and lcores have the same value 00:49:41.653 -n, --mem-channels channel number of memory channels used for DPDK 00:49:41.653 -p, --main-core main (primary) core for DPDK 00:49:41.653 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:49:41.653 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:49:41.653 --disable-cpumask-locks Disable CPU core lock files. 00:49:41.653 --silence-noticelog disable notice level logging to stderr 00:49:41.653 --msg-mempool-size global message memory pool size in count (default: 262143) 00:49:41.653 -u, --no-pci disable PCI access 00:49:41.653 --wait-for-rpc wait for RPCs to initialize subsystems 00:49:41.653 --max-delay maximum reactor delay (in microseconds) 00:49:41.653 -B, --pci-blocked pci addr to block (can be used more than once) 00:49:41.653 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:49:41.653 -R, --huge-unlink unlink huge files after initialization 00:49:41.653 -v, --version print SPDK version 00:49:41.653 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:49:41.653 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:49:41.653 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:49:41.653 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:49:41.653 Tracepoints vary in size and can use more than one trace entry. 00:49:41.653 --rpcs-allowed comma-separated list of permitted RPCS 00:49:41.653 --env-context Opaque context for use of the env implementation 00:49:41.653 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:49:41.653 --no-huge run without using hugepages 00:49:41.653 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:49:41.653 -e, --tpoint-group [:] 00:49:41.653 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:49:41.653 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 
00:49:41.653 Groups and [2024-07-22 16:26:45.784932] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:49:41.653 masks can be combined (e.g. thread,bdev:0x1). 00:49:41.653 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:49:41.653 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:49:41.653 [--------- DD Options ---------] 00:49:41.653 --if Input file. Must specify either --if or --ib. 00:49:41.653 --ib Input bdev. Must specifier either --if or --ib 00:49:41.653 --of Output file. Must specify either --of or --ob. 00:49:41.653 --ob Output bdev. Must specify either --of or --ob. 00:49:41.653 --iflag Input file flags. 00:49:41.653 --oflag Output file flags. 00:49:41.653 --bs I/O unit size (default: 4096) 00:49:41.653 --qd Queue depth (default: 2) 00:49:41.653 --count I/O unit count. The number of I/O units to copy. (default: all) 00:49:41.653 --skip Skip this many I/O units at start of input. (default: 0) 00:49:41.653 --seek Skip this many I/O units at start of output. (default: 0) 00:49:41.653 --aio Force usage of AIO. (by default io_uring is used if available) 00:49:41.653 --sparse Enable hole skipping in input target 00:49:41.653 Available iflag and oflag values: 00:49:41.653 append - append mode 00:49:41.653 direct - use direct I/O for data 00:49:41.653 directory - fail unless a directory 00:49:41.653 dsync - use synchronized I/O for data 00:49:41.653 noatime - do not update access time 00:49:41.653 noctty - do not assign controlling terminal from file 00:49:41.653 nofollow - do not follow symlinks 00:49:41.653 nonblock - use non-blocking I/O 00:49:41.653 sync - use synchronized I/O for data and metadata 00:49:41.653 ************************************ 00:49:41.653 END TEST dd_invalid_arguments 00:49:41.653 ************************************ 00:49:41.653 16:26:45 -- common/autotest_common.sh@643 -- # es=2 00:49:41.653 16:26:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:41.653 16:26:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:41.653 16:26:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:41.653 00:49:41.653 real 0m0.107s 00:49:41.653 user 0m0.056s 00:49:41.653 sys 0m0.051s 00:49:41.653 16:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:41.653 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:49:41.653 16:26:45 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:49:41.653 16:26:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:41.653 16:26:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:41.653 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:49:41.653 ************************************ 00:49:41.653 START TEST dd_double_input 00:49:41.653 ************************************ 00:49:41.653 16:26:45 -- common/autotest_common.sh@1104 -- # double_input 00:49:41.653 16:26:45 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:49:41.653 16:26:45 -- common/autotest_common.sh@640 -- # local es=0 00:49:41.653 16:26:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:49:41.653 16:26:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.653 16:26:45 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:49:41.653 16:26:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.653 16:26:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.653 16:26:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.653 16:26:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.653 16:26:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.653 16:26:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:41.653 16:26:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:49:41.911 [2024-07-22 16:26:45.948695] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 00:49:41.911 16:26:45 -- common/autotest_common.sh@643 -- # es=22 00:49:41.911 16:26:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:41.911 16:26:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:41.911 16:26:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:41.911 00:49:41.911 real 0m0.108s 00:49:41.911 user 0m0.046s 00:49:41.911 sys 0m0.062s 00:49:41.911 16:26:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:41.911 ************************************ 00:49:41.911 END TEST dd_double_input 00:49:41.911 ************************************ 00:49:41.911 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:49:41.911 16:26:46 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:49:41.911 16:26:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:41.911 16:26:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:41.911 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:41.911 ************************************ 00:49:41.911 START TEST dd_double_output 00:49:41.911 ************************************ 00:49:41.911 16:26:46 -- common/autotest_common.sh@1104 -- # double_output 00:49:41.911 16:26:46 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:49:41.911 16:26:46 -- common/autotest_common.sh@640 -- # local es=0 00:49:41.911 16:26:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:49:41.911 16:26:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.911 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.911 16:26:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.911 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.911 16:26:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.911 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:41.911 16:26:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:41.911 16:26:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:41.911 16:26:46 -- common/autotest_common.sh@643 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:49:41.911 [2024-07-22 16:26:46.106718] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 00:49:41.911 16:26:46 -- common/autotest_common.sh@643 -- # es=22 00:49:41.911 16:26:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:41.911 16:26:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:41.911 16:26:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:41.911 00:49:41.911 real 0m0.122s 00:49:41.911 user 0m0.061s 00:49:41.911 sys 0m0.062s 00:49:41.911 16:26:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:41.911 ************************************ 00:49:41.911 END TEST dd_double_output 00:49:41.911 ************************************ 00:49:41.911 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.169 16:26:46 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:49:42.169 16:26:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:42.169 16:26:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:42.169 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.169 ************************************ 00:49:42.169 START TEST dd_no_input 00:49:42.169 ************************************ 00:49:42.169 16:26:46 -- common/autotest_common.sh@1104 -- # no_input 00:49:42.169 16:26:46 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:49:42.169 16:26:46 -- common/autotest_common.sh@640 -- # local es=0 00:49:42.169 16:26:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:49:42.169 16:26:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.169 16:26:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.169 16:26:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:42.169 16:26:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:49:42.169 [2024-07-22 16:26:46.301542] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:49:42.169 16:26:46 -- common/autotest_common.sh@643 -- # es=22 00:49:42.169 16:26:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:42.169 16:26:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:42.169 16:26:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:42.169 00:49:42.169 real 0m0.140s 00:49:42.169 user 0m0.068s 00:49:42.169 sys 0m0.072s 00:49:42.169 16:26:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:42.169 ************************************ 00:49:42.169 END TEST dd_no_input 00:49:42.169 ************************************ 00:49:42.169 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.169 16:26:46 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 
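The negative tests above all follow one shape: an spdk_dd invocation that is expected to fail is wrapped in the NOT helper, valid_exec_arg resolves the binary, and the test step passes only if the exit status is non-zero (es=2 for the unrecognized --ii= option, es=22 for the mutually exclusive or invalid-value cases seen here). The heart of that idiom can be restated in a few lines; this stand-in skips the bookkeeping the real helper in autotest_common.sh performs, and the input names are made up:

NOT() {
    if "$@"; then
        return 1    # the wrapped command unexpectedly succeeded
    fi
    return 0        # failure was expected, so the test step passes
}

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
NOT "$SPDK_DD" --ii= --ob=                            # unrecognized option
NOT "$SPDK_DD" --if=/tmp/in --ib=some_bdev --ob=out   # --if and --ib together are rejected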
00:49:42.169 16:26:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:42.169 16:26:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:42.169 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.169 ************************************ 00:49:42.169 START TEST dd_no_output 00:49:42.169 ************************************ 00:49:42.169 16:26:46 -- common/autotest_common.sh@1104 -- # no_output 00:49:42.169 16:26:46 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:49:42.169 16:26:46 -- common/autotest_common.sh@640 -- # local es=0 00:49:42.169 16:26:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:49:42.169 16:26:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.169 16:26:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.169 16:26:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.169 16:26:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:42.169 16:26:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:49:42.427 [2024-07-22 16:26:46.471352] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:49:42.427 16:26:46 -- common/autotest_common.sh@643 -- # es=22 00:49:42.427 16:26:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:42.427 16:26:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:42.427 16:26:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:42.427 00:49:42.427 real 0m0.117s 00:49:42.427 user 0m0.062s 00:49:42.427 sys 0m0.056s 00:49:42.427 16:26:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:42.427 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.427 ************************************ 00:49:42.427 END TEST dd_no_output 00:49:42.427 ************************************ 00:49:42.427 16:26:46 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:49:42.427 16:26:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:42.427 16:26:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:42.427 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.427 ************************************ 00:49:42.427 START TEST dd_wrong_blocksize 00:49:42.427 ************************************ 00:49:42.427 16:26:46 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:49:42.427 16:26:46 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:49:42.427 16:26:46 -- common/autotest_common.sh@640 -- # local es=0 00:49:42.427 16:26:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:49:42.427 16:26:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.427 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.427 16:26:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.427 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.427 16:26:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.427 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.427 16:26:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.427 16:26:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:42.427 16:26:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:49:42.427 [2024-07-22 16:26:46.637206] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:49:42.427 16:26:46 -- common/autotest_common.sh@643 -- # es=22 00:49:42.427 16:26:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:42.427 16:26:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:42.427 16:26:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:42.427 00:49:42.427 real 0m0.108s 00:49:42.427 user 0m0.067s 00:49:42.427 sys 0m0.042s 00:49:42.427 16:26:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:42.427 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.427 ************************************ 00:49:42.427 END TEST dd_wrong_blocksize 00:49:42.427 ************************************ 00:49:42.689 16:26:46 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:49:42.689 16:26:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:42.689 16:26:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:42.689 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:49:42.689 ************************************ 00:49:42.689 START TEST dd_smaller_blocksize 00:49:42.689 ************************************ 00:49:42.689 16:26:46 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:49:42.689 16:26:46 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:49:42.689 16:26:46 -- common/autotest_common.sh@640 -- # local es=0 00:49:42.689 16:26:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:49:42.689 16:26:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.689 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.689 16:26:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.689 16:26:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.689 16:26:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.689 16:26:46 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:42.689 16:26:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:42.689 16:26:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:42.689 16:26:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:49:42.689 [2024-07-22 16:26:46.801176] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:42.689 [2024-07-22 16:26:46.801380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92683 ] 00:49:42.947 [2024-07-22 16:26:46.978952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:43.205 [2024-07-22 16:26:47.289185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:43.771 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:49:43.771 [2024-07-22 16:26:47.926383] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:49:43.771 [2024-07-22 16:26:47.926512] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:49:44.704 [2024-07-22 16:26:48.695756] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:49:44.962 16:26:49 -- common/autotest_common.sh@643 -- # es=244 00:49:44.962 16:26:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:44.962 16:26:49 -- common/autotest_common.sh@652 -- # es=116 00:49:44.962 16:26:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:49:44.962 16:26:49 -- common/autotest_common.sh@660 -- # es=1 00:49:44.962 16:26:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:44.962 00:49:44.962 real 0m2.410s 00:49:44.962 user 0m1.743s 00:49:44.962 sys 0m0.566s 00:49:44.962 16:26:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:44.962 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:44.962 ************************************ 00:49:44.962 END TEST dd_smaller_blocksize 00:49:44.962 ************************************ 00:49:44.962 16:26:49 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:49:44.962 16:26:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:44.962 16:26:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:44.962 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:44.962 ************************************ 00:49:44.962 START TEST dd_invalid_count 00:49:44.962 ************************************ 00:49:44.962 16:26:49 -- common/autotest_common.sh@1104 -- # invalid_count 00:49:44.962 16:26:49 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:49:44.962 16:26:49 -- common/autotest_common.sh@640 -- # local es=0 00:49:44.962 16:26:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:49:44.962 16:26:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:44.962 16:26:49 
-- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:44.962 16:26:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:44.962 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:44.962 16:26:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:44.962 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:44.962 16:26:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:44.962 16:26:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:44.962 16:26:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:49:45.220 [2024-07-22 16:26:49.285190] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:49:45.220 16:26:49 -- common/autotest_common.sh@643 -- # es=22 00:49:45.220 16:26:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:45.220 16:26:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:45.220 16:26:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:45.220 00:49:45.220 real 0m0.134s 00:49:45.220 user 0m0.063s 00:49:45.220 sys 0m0.071s 00:49:45.220 16:26:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:45.220 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:45.220 ************************************ 00:49:45.220 END TEST dd_invalid_count 00:49:45.220 ************************************ 00:49:45.220 16:26:49 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:49:45.220 16:26:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:45.220 16:26:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:45.220 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:45.220 ************************************ 00:49:45.220 START TEST dd_invalid_oflag 00:49:45.220 ************************************ 00:49:45.220 16:26:49 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:49:45.220 16:26:49 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:49:45.220 16:26:49 -- common/autotest_common.sh@640 -- # local es=0 00:49:45.220 16:26:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:49:45.220 16:26:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.220 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.220 16:26:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.220 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.220 16:26:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.220 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.220 16:26:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.220 16:26:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:45.220 16:26:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:49:45.220 [2024-07-22 16:26:49.448026] 
spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:49:45.220 16:26:49 -- common/autotest_common.sh@643 -- # es=22 00:49:45.220 16:26:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:45.220 16:26:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:45.220 16:26:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:45.478 00:49:45.478 real 0m0.105s 00:49:45.478 user 0m0.049s 00:49:45.478 sys 0m0.057s 00:49:45.478 16:26:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:45.478 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:45.478 ************************************ 00:49:45.478 END TEST dd_invalid_oflag 00:49:45.478 ************************************ 00:49:45.478 16:26:49 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:49:45.478 16:26:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:45.478 16:26:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:45.478 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:45.478 ************************************ 00:49:45.478 START TEST dd_invalid_iflag 00:49:45.478 ************************************ 00:49:45.478 16:26:49 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:49:45.478 16:26:49 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:49:45.478 16:26:49 -- common/autotest_common.sh@640 -- # local es=0 00:49:45.478 16:26:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:49:45.478 16:26:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.478 16:26:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.478 16:26:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:45.478 16:26:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:49:45.478 [2024-07-22 16:26:49.600521] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:49:45.478 16:26:49 -- common/autotest_common.sh@643 -- # es=22 00:49:45.478 16:26:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:45.478 16:26:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:49:45.478 16:26:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:45.478 00:49:45.478 real 0m0.100s 00:49:45.478 user 0m0.058s 00:49:45.478 sys 0m0.043s 00:49:45.478 16:26:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:45.478 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:45.478 ************************************ 00:49:45.478 END TEST dd_invalid_iflag 00:49:45.478 ************************************ 00:49:45.478 16:26:49 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:49:45.478 16:26:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:45.478 16:26:49 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:49:45.478 16:26:49 -- common/autotest_common.sh@10 -- # set +x 00:49:45.478 ************************************ 00:49:45.478 START TEST dd_unknown_flag 00:49:45.478 ************************************ 00:49:45.478 16:26:49 -- common/autotest_common.sh@1104 -- # unknown_flag 00:49:45.478 16:26:49 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:49:45.478 16:26:49 -- common/autotest_common.sh@640 -- # local es=0 00:49:45.478 16:26:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:49:45.478 16:26:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.478 16:26:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:45.478 16:26:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:45.478 16:26:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:45.478 16:26:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:49:45.736 [2024-07-22 16:26:49.758545] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:49:45.736 [2024-07-22 16:26:49.758716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92796 ] 00:49:45.736 [2024-07-22 16:26:49.919135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:45.993 [2024-07-22 16:26:50.196141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:46.558 [2024-07-22 16:26:50.550036] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:49:46.558 [2024-07-22 16:26:50.550137] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:49:46.558 [2024-07-22 16:26:50.550158] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:49:46.558 [2024-07-22 16:26:50.550183] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:49:47.123 [2024-07-22 16:26:51.370694] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:49:47.690 16:26:51 -- common/autotest_common.sh@643 -- # es=236 00:49:47.690 16:26:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:47.690 16:26:51 -- common/autotest_common.sh@652 -- # es=108 00:49:47.690 16:26:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:49:47.690 16:26:51 -- common/autotest_common.sh@660 -- # es=1 00:49:47.690 16:26:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:47.690 00:49:47.690 real 0m2.149s 00:49:47.690 user 0m1.714s 00:49:47.690 sys 0m0.335s 00:49:47.690 ************************************ 00:49:47.690 END TEST dd_unknown_flag 00:49:47.690 ************************************ 00:49:47.690 16:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:47.690 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:49:47.690 16:26:51 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:49:47.690 16:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:49:47.690 16:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:47.690 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:49:47.690 ************************************ 00:49:47.690 START TEST dd_invalid_json 00:49:47.690 ************************************ 00:49:47.690 16:26:51 -- common/autotest_common.sh@1104 -- # invalid_json 00:49:47.690 16:26:51 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:49:47.690 16:26:51 -- common/autotest_common.sh@640 -- # local es=0 00:49:47.690 16:26:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:49:47.690 16:26:51 -- dd/negative_dd.sh@95 -- # : 00:49:47.690 16:26:51 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:47.690 16:26:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:47.690 16:26:51 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:47.690 16:26:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:47.690 16:26:51 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:49:47.690 16:26:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:49:47.690 16:26:51 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:49:47.690 16:26:51 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:49:47.690 16:26:51 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:49:47.948 [2024-07-22 16:26:51.969641] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:47.948 [2024-07-22 16:26:51.969822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92841 ] 00:49:47.948 [2024-07-22 16:26:52.142924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:48.206 [2024-07-22 16:26:52.410444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:48.206 [2024-07-22 16:26:52.410678] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:49:48.206 [2024-07-22 16:26:52.410715] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:49:48.206 [2024-07-22 16:26:52.410783] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:49:48.771 16:26:52 -- common/autotest_common.sh@643 -- # es=234 00:49:48.771 16:26:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:49:48.771 16:26:52 -- common/autotest_common.sh@652 -- # es=106 00:49:48.771 ************************************ 00:49:48.771 END TEST dd_invalid_json 00:49:48.771 ************************************ 00:49:48.771 16:26:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:49:48.771 16:26:52 -- common/autotest_common.sh@660 -- # es=1 00:49:48.771 16:26:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:49:48.771 00:49:48.771 real 0m0.993s 00:49:48.771 user 0m0.721s 00:49:48.771 sys 0m0.173s 00:49:48.771 16:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:48.771 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:49:48.771 ************************************ 00:49:48.771 END TEST spdk_dd_negative 00:49:48.771 ************************************ 00:49:48.771 00:49:48.771 real 0m7.317s 00:49:48.771 user 0m4.913s 00:49:48.771 sys 0m2.088s 00:49:48.771 16:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:48.771 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:49:48.771 ************************************ 00:49:48.771 END TEST spdk_dd 00:49:48.771 ************************************ 00:49:48.771 00:49:48.771 real 3m1.602s 00:49:48.771 user 2m21.984s 00:49:48.771 sys 0m29.399s 00:49:48.771 16:26:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:48.771 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:49:48.771 16:26:53 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:49:48.771 16:26:53 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:49:48.771 16:26:53 -- spdk/autotest.sh@268 -- # timing_exit lib 00:49:48.771 16:26:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:49:48.771 16:26:53 -- common/autotest_common.sh@10 -- # set +x 00:49:49.029 16:26:53 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- 
spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:49:49.029 16:26:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:49:49.029 16:26:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:49:49.029 16:26:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:49:49.029 16:26:53 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:49:49.029 16:26:53 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:49:49.029 16:26:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:49:49.029 16:26:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:49.029 16:26:53 -- common/autotest_common.sh@10 -- # set +x 00:49:49.029 ************************************ 00:49:49.029 START TEST blockdev_raid5f 00:49:49.029 ************************************ 00:49:49.029 16:26:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:49:49.029 * Looking for test storage... 00:49:49.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:49:49.029 16:26:53 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:49:49.029 16:26:53 -- bdev/nbd_common.sh@6 -- # set -e 00:49:49.029 16:26:53 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:49:49.029 16:26:53 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:49:49.029 16:26:53 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:49:49.029 16:26:53 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:49:49.029 16:26:53 -- bdev/blockdev.sh@18 -- # : 00:49:49.029 16:26:53 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:49:49.029 16:26:53 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:49:49.029 16:26:53 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:49:49.029 16:26:53 -- bdev/blockdev.sh@672 -- # uname -s 00:49:49.029 16:26:53 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:49:49.029 16:26:53 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:49:49.029 16:26:53 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:49:49.029 16:26:53 -- bdev/blockdev.sh@681 -- # crypto_device= 00:49:49.029 16:26:53 -- bdev/blockdev.sh@682 -- # dek= 00:49:49.029 16:26:53 -- bdev/blockdev.sh@683 -- # env_ctx= 00:49:49.029 16:26:53 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:49:49.029 16:26:53 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:49:49.029 16:26:53 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:49:49.029 16:26:53 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:49:49.029 16:26:53 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:49:49.029 16:26:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=92942 00:49:49.029 16:26:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:49:49.029 16:26:53 -- bdev/blockdev.sh@47 -- # waitforlisten 92942 00:49:49.029 16:26:53 -- common/autotest_common.sh@819 -- # '[' -z 92942 ']' 00:49:49.029 16:26:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:49.029 16:26:53 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:49:49.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:49.029 16:26:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:49:49.029 16:26:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:49.029 16:26:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:49:49.029 16:26:53 -- common/autotest_common.sh@10 -- # set +x 00:49:49.029 [2024-07-22 16:26:53.244317] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:49.029 [2024-07-22 16:26:53.244697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92942 ] 00:49:49.287 [2024-07-22 16:26:53.411358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:49.545 [2024-07-22 16:26:53.679633] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:49:49.545 [2024-07-22 16:26:53.679916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:50.919 16:26:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:49:50.919 16:26:54 -- common/autotest_common.sh@852 -- # return 0 00:49:50.919 16:26:54 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:49:50.919 16:26:54 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:49:50.919 16:26:54 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:49:50.919 16:26:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:49:50.919 16:26:54 -- common/autotest_common.sh@10 -- # set +x 00:49:50.919 Malloc0 00:49:50.919 Malloc1 00:49:50.919 Malloc2 00:49:50.919 16:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:49:50.919 16:26:55 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:49:50.919 16:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:49:50.919 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:49:50.919 16:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:49:50.919 16:26:55 -- bdev/blockdev.sh@738 -- # cat 00:49:50.919 16:26:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:49:50.919 16:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:49:50.919 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:49:50.919 16:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:49:50.919 16:26:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:49:50.919 16:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:49:50.919 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:49:50.919 16:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:49:50.919 16:26:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:49:50.919 16:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:49:50.919 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:49:50.919 16:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:49:50.919 16:26:55 -- bdev/blockdev.sh@746 
-- # mapfile -t bdevs 00:49:50.919 16:26:55 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:49:50.919 16:26:55 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:49:50.919 16:26:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:49:50.919 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:49:51.177 16:26:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:49:51.177 16:26:55 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:49:51.177 16:26:55 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "4c718a63-50d9-4e7f-89a8-0b258f446fdb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4c718a63-50d9-4e7f-89a8-0b258f446fdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4c718a63-50d9-4e7f-89a8-0b258f446fdb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fc0d1861-da47-4c0c-be69-954d1703aec4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5ba44b7b-4407-4af6-8d0a-421a17d95229",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3c59cc8e-ef1a-49e0-a8dd-017bda5dc1d0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:49:51.177 16:26:55 -- bdev/blockdev.sh@747 -- # jq -r .name 00:49:51.177 16:26:55 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:49:51.177 16:26:55 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:49:51.177 16:26:55 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:49:51.177 16:26:55 -- bdev/blockdev.sh@752 -- # killprocess 92942 00:49:51.177 16:26:55 -- common/autotest_common.sh@926 -- # '[' -z 92942 ']' 00:49:51.177 16:26:55 -- common/autotest_common.sh@930 -- # kill -0 92942 00:49:51.178 16:26:55 -- common/autotest_common.sh@931 -- # uname 00:49:51.178 16:26:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:49:51.178 16:26:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92942 00:49:51.178 killing process with pid 92942 00:49:51.178 16:26:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:49:51.178 16:26:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:49:51.178 16:26:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92942' 00:49:51.178 16:26:55 -- common/autotest_common.sh@945 -- # kill 92942 00:49:51.178 16:26:55 -- common/autotest_common.sh@950 -- # wait 92942 00:49:53.707 16:26:57 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:49:53.707 16:26:57 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:49:53.707 16:26:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
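The JSON blob printed just above is the raw bdev_get_bdevs record for the raid5f volume (three Malloc base bdevs, strip_size_kb 2, 131072 blocks of 512 bytes); blockdev.sh only needs the name out of it, which is how hello_world_bdev ends up set to raid5f before the hello-world and bounds tests run. A rough stand-alone equivalent of the traced mapfile/jq steps (calling rpc.py directly is an assumption; in the log the rpc_cmd helper wraps it):

  # list unclaimed bdevs and keep only their names, as blockdev.sh@746-747 does above
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'
  # in this run the only unclaimed bdev is raid5f, so hello_world_bdev=raid5f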
00:49:53.707 16:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:53.707 16:26:57 -- common/autotest_common.sh@10 -- # set +x 00:49:53.707 ************************************ 00:49:53.707 START TEST bdev_hello_world 00:49:53.707 ************************************ 00:49:53.707 16:26:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:49:53.965 [2024-07-22 16:26:57.996417] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:53.965 [2024-07-22 16:26:57.996651] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93016 ] 00:49:53.965 [2024-07-22 16:26:58.170216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:54.223 [2024-07-22 16:26:58.449807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:54.789 [2024-07-22 16:26:59.011818] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:49:54.789 [2024-07-22 16:26:59.011899] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:49:54.789 [2024-07-22 16:26:59.011938] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:49:54.789 [2024-07-22 16:26:59.012626] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:49:54.789 [2024-07-22 16:26:59.012846] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:49:54.789 [2024-07-22 16:26:59.012896] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:49:54.789 [2024-07-22 16:26:59.012977] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:49:54.789 00:49:54.789 [2024-07-22 16:26:59.013024] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:49:56.774 ************************************ 00:49:56.774 END TEST bdev_hello_world 00:49:56.774 ************************************ 00:49:56.774 00:49:56.774 real 0m2.696s 00:49:56.774 user 0m2.175s 00:49:56.774 sys 0m0.412s 00:49:56.774 16:27:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:49:56.774 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:49:56.774 16:27:00 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:49:56.774 16:27:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:49:56.774 16:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:49:56.774 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:49:56.774 ************************************ 00:49:56.774 START TEST bdev_bounds 00:49:56.774 ************************************ 00:49:56.774 16:27:00 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:49:56.774 16:27:00 -- bdev/blockdev.sh@288 -- # bdevio_pid=93064 00:49:56.774 16:27:00 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:49:56.774 16:27:00 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:49:56.774 Process bdevio pid: 93064 00:49:56.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
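bdev_hello_world, which finished just above in about 2.7 s, is simply the stock hello_bdev example pointed at the raid5f bdev; its NOTICE lines trace the open, write, read-back and stop sequence, ending in "Read string from bdev : Hello World!". The invocation as captured in the trace (the empty trailing '' appears to be the suite's env_ctx, which this run leaves unset):

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b raid5f ''
  # expected tail of the output, per the log:
  #   Read string from bdev : Hello World!
  #   Stopping app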
00:49:56.774 16:27:00 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 93064' 00:49:56.774 16:27:00 -- bdev/blockdev.sh@291 -- # waitforlisten 93064 00:49:56.774 16:27:00 -- common/autotest_common.sh@819 -- # '[' -z 93064 ']' 00:49:56.774 16:27:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:56.774 16:27:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:49:56.774 16:27:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:56.774 16:27:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:49:56.774 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:49:56.774 [2024-07-22 16:27:00.759705] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:49:56.774 [2024-07-22 16:27:00.759941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93064 ] 00:49:56.774 [2024-07-22 16:27:00.932156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:49:57.032 [2024-07-22 16:27:01.248673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:57.032 [2024-07-22 16:27:01.248728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:57.033 [2024-07-22 16:27:01.248734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:49:58.417 16:27:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:49:58.417 16:27:02 -- common/autotest_common.sh@852 -- # return 0 00:49:58.417 16:27:02 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:49:58.417 I/O targets: 00:49:58.417 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:49:58.417 00:49:58.417 00:49:58.417 CUnit - A unit testing framework for C - Version 2.1-3 00:49:58.417 http://cunit.sourceforge.net/ 00:49:58.417 00:49:58.417 00:49:58.417 Suite: bdevio tests on: raid5f 00:49:58.417 Test: blockdev write read block ...passed 00:49:58.417 Test: blockdev write zeroes read block ...passed 00:49:58.417 Test: blockdev write zeroes read no split ...passed 00:49:58.676 Test: blockdev write zeroes read split ...passed 00:49:58.676 Test: blockdev write zeroes read split partial ...passed 00:49:58.676 Test: blockdev reset ...passed 00:49:58.676 Test: blockdev write read 8 blocks ...passed 00:49:58.676 Test: blockdev write read size > 128k ...passed 00:49:58.676 Test: blockdev write read invalid size ...passed 00:49:58.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:49:58.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:49:58.676 Test: blockdev write read max offset ...passed 00:49:58.676 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:49:58.676 Test: blockdev writev readv 8 blocks ...passed 00:49:58.676 Test: blockdev writev readv 30 x 1block ...passed 00:49:58.676 Test: blockdev writev readv block ...passed 00:49:58.676 Test: blockdev writev readv size > 128k ...passed 00:49:58.676 Test: blockdev writev readv size > 128k in two iovs ...passed 00:49:58.676 Test: blockdev comparev and writev ...passed 00:49:58.676 Test: blockdev nvme passthru rw ...passed 00:49:58.676 Test: blockdev nvme passthru vendor specific ...passed 00:49:58.676 Test: blockdev nvme admin passthru ...passed 00:49:58.676 Test: blockdev copy ...passed 
00:49:58.676 00:49:58.676 Run Summary: Type Total Ran Passed Failed Inactive 00:49:58.676 suites 1 1 n/a 0 0 00:49:58.676 tests 23 23 23 0 0 00:49:58.676 asserts 130 130 130 0 n/a 00:49:58.676 00:49:58.676 Elapsed time = 0.619 seconds 00:49:58.676 0 00:49:58.676 16:27:02 -- bdev/blockdev.sh@293 -- # killprocess 93064 00:49:58.676 16:27:02 -- common/autotest_common.sh@926 -- # '[' -z 93064 ']' 00:49:58.676 16:27:02 -- common/autotest_common.sh@930 -- # kill -0 93064 00:49:58.676 16:27:02 -- common/autotest_common.sh@931 -- # uname 00:49:58.676 16:27:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:49:58.676 16:27:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93064 00:49:58.933 16:27:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:49:58.933 killing process with pid 93064 00:49:58.933 16:27:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:49:58.933 16:27:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93064' 00:49:58.933 16:27:02 -- common/autotest_common.sh@945 -- # kill 93064 00:49:58.933 16:27:02 -- common/autotest_common.sh@950 -- # wait 93064 00:50:00.306 16:27:04 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:50:00.306 00:50:00.306 real 0m3.877s 00:50:00.306 user 0m9.590s 00:50:00.306 sys 0m0.606s 00:50:00.306 16:27:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:00.306 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:50:00.306 ************************************ 00:50:00.306 END TEST bdev_bounds 00:50:00.306 ************************************ 00:50:00.564 16:27:04 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:50:00.564 16:27:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:50:00.564 16:27:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:00.564 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:50:00.564 ************************************ 00:50:00.564 START TEST bdev_nbd 00:50:00.564 ************************************ 00:50:00.564 16:27:04 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:50:00.564 16:27:04 -- bdev/blockdev.sh@298 -- # uname -s 00:50:00.564 16:27:04 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:50:00.564 16:27:04 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:00.564 16:27:04 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:50:00.564 16:27:04 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:50:00.564 16:27:04 -- bdev/blockdev.sh@302 -- # local bdev_all 00:50:00.564 16:27:04 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:50:00.564 16:27:04 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:50:00.564 16:27:04 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:50:00.564 16:27:04 -- bdev/blockdev.sh@309 -- # local nbd_all 00:50:00.564 16:27:04 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:50:00.564 16:27:04 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:50:00.564 16:27:04 -- bdev/blockdev.sh@312 -- # local nbd_list 00:50:00.564 16:27:04 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:50:00.564 16:27:04 -- bdev/blockdev.sh@313 -- # local bdev_list 00:50:00.564 16:27:04 -- 
bdev/blockdev.sh@316 -- # nbd_pid=93133 00:50:00.564 16:27:04 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:50:00.564 16:27:04 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:50:00.564 16:27:04 -- bdev/blockdev.sh@318 -- # waitforlisten 93133 /var/tmp/spdk-nbd.sock 00:50:00.564 16:27:04 -- common/autotest_common.sh@819 -- # '[' -z 93133 ']' 00:50:00.564 16:27:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:50:00.564 16:27:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:50:00.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:50:00.564 16:27:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:50:00.564 16:27:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:50:00.564 16:27:04 -- common/autotest_common.sh@10 -- # set +x 00:50:00.564 [2024-07-22 16:27:04.652036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:50:00.564 [2024-07-22 16:27:04.652183] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:00.564 [2024-07-22 16:27:04.818768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:01.130 [2024-07-22 16:27:05.145976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:02.527 16:27:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:50:02.527 16:27:06 -- common/autotest_common.sh@852 -- # return 0 00:50:02.527 16:27:06 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@24 -- # local i 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:50:02.527 16:27:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:50:02.527 16:27:06 -- common/autotest_common.sh@857 -- # local i 00:50:02.527 16:27:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:50:02.527 16:27:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:50:02.527 16:27:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:50:02.527 
16:27:06 -- common/autotest_common.sh@861 -- # break 00:50:02.527 16:27:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:50:02.527 16:27:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:50:02.527 16:27:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:50:02.527 1+0 records in 00:50:02.527 1+0 records out 00:50:02.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00198482 s, 2.1 MB/s 00:50:02.527 16:27:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:02.527 16:27:06 -- common/autotest_common.sh@874 -- # size=4096 00:50:02.527 16:27:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:02.527 16:27:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:50:02.527 16:27:06 -- common/autotest_common.sh@877 -- # return 0 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:50:02.527 16:27:06 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:50:02.785 { 00:50:02.785 "nbd_device": "/dev/nbd0", 00:50:02.785 "bdev_name": "raid5f" 00:50:02.785 } 00:50:02.785 ]' 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@119 -- # echo '[ 00:50:02.785 { 00:50:02.785 "nbd_device": "/dev/nbd0", 00:50:02.785 "bdev_name": "raid5f" 00:50:02.785 } 00:50:02.785 ]' 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@51 -- # local i 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:02.785 16:27:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@41 -- # break 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@45 -- # return 0 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:03.043 16:27:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:50:03.301 16:27:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@65 -- # true 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@65 -- # count=0 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@122 -- # count=0 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@127 -- # return 0 00:50:03.301 16:27:07 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@12 -- # local i 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:50:03.301 16:27:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:50:03.559 /dev/nbd0 00:50:03.559 16:27:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:50:03.559 16:27:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:50:03.559 16:27:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:50:03.559 16:27:07 -- common/autotest_common.sh@857 -- # local i 00:50:03.559 16:27:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:50:03.559 16:27:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:50:03.559 16:27:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:50:03.559 16:27:07 -- common/autotest_common.sh@861 -- # break 00:50:03.559 16:27:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:50:03.559 16:27:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:50:03.559 16:27:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:50:03.559 1+0 records in 00:50:03.559 1+0 records out 00:50:03.559 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361502 s, 11.3 MB/s 00:50:03.559 16:27:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:03.559 16:27:07 -- common/autotest_common.sh@874 -- # size=4096 00:50:03.559 16:27:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:50:03.559 16:27:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:50:03.559 16:27:07 -- common/autotest_common.sh@877 -- # return 0 00:50:03.559 16:27:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:50:03.559 16:27:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:50:03.559 16:27:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:03.559 16:27:07 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:03.559 16:27:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:50:04.125 { 00:50:04.125 "nbd_device": "/dev/nbd0", 00:50:04.125 "bdev_name": "raid5f" 00:50:04.125 } 00:50:04.125 ]' 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:50:04.125 { 00:50:04.125 "nbd_device": "/dev/nbd0", 00:50:04.125 "bdev_name": "raid5f" 00:50:04.125 } 00:50:04.125 ]' 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@65 -- # count=1 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@66 -- # echo 1 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@95 -- # count=1 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:50:04.125 256+0 records in 00:50:04.125 256+0 records out 00:50:04.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073963 s, 142 MB/s 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:50:04.125 256+0 records in 00:50:04.125 256+0 records out 00:50:04.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0444558 s, 23.6 MB/s 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@51 -- # local i 00:50:04.125 
16:27:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:04.125 16:27:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@41 -- # break 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@45 -- # return 0 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:04.384 16:27:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@65 -- # true 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@65 -- # count=0 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@104 -- # count=0 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@109 -- # return 0 00:50:04.642 16:27:08 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:50:04.642 16:27:08 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:50:04.900 malloc_lvol_verify 00:50:04.900 16:27:09 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:50:05.158 629a7421-5821-4cb4-8795-96fc66100402 00:50:05.158 16:27:09 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:50:05.416 8ed7ffb0-1003-4fad-aa50-8478a9693a46 00:50:05.416 16:27:09 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:50:05.674 /dev/nbd0 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:50:05.674 mke2fs 1.47.0 (5-Feb-2023) 00:50:05.674 00:50:05.674 Filesystem too small for a journal 00:50:05.674 Discarding device blocks: 0/1024 done 00:50:05.674 Creating filesystem with 1024 4k blocks and 1024 inodes 00:50:05.674 00:50:05.674 Allocating group tables: 0/1 done 00:50:05.674 Writing inode tables: 0/1 done 00:50:05.674 Writing superblocks and filesystem accounting information: 0/1 done 
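The mkfs output above is the tail end of the lvol-over-NBD check: a small logical volume is carved out of a malloc bdev, exposed as /dev/nbd0, and proven usable by formatting it with ext4. A condensed sketch of that flow, using the RPC calls and sizes shown in the trace (the 16/512 malloc parameters and the 4 MiB lvol come straight from the logged commands):

```bash
#!/usr/bin/env bash
# Sketch of the nbd_with_lvol_verify flow traced above; requires root for mkfs
# on /dev/nbd0 and an SPDK app listening on /var/tmp/spdk-nbd.sock.
set -euo pipefail

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev
$RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs    # lvstore on top of it
$RPC bdev_lvol_create lvol 4 -l lvs                     # small lvol inside lvs

# Export the lvol as a kernel block device and format it; mkfs completing
# cleanly is the actual verification step, after which the export is torn down.
$RPC nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0
$RPC nbd_stop_disk /dev/nbd0
```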
00:50:05.674 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@51 -- # local i 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:50:05.674 16:27:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@41 -- # break 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@45 -- # return 0 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:50:05.932 16:27:10 -- bdev/nbd_common.sh@147 -- # return 0 00:50:05.932 16:27:10 -- bdev/blockdev.sh@324 -- # killprocess 93133 00:50:05.932 16:27:10 -- common/autotest_common.sh@926 -- # '[' -z 93133 ']' 00:50:05.932 16:27:10 -- common/autotest_common.sh@930 -- # kill -0 93133 00:50:05.932 16:27:10 -- common/autotest_common.sh@931 -- # uname 00:50:05.932 16:27:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:50:05.932 16:27:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93133 00:50:05.932 16:27:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:50:05.932 16:27:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:50:05.932 16:27:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93133' 00:50:05.932 killing process with pid 93133 00:50:05.932 16:27:10 -- common/autotest_common.sh@945 -- # kill 93133 00:50:05.932 16:27:10 -- common/autotest_common.sh@950 -- # wait 93133 00:50:07.834 16:27:11 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:50:07.834 00:50:07.834 real 0m7.112s 00:50:07.834 user 0m9.843s 00:50:07.834 sys 0m1.473s 00:50:07.834 16:27:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:07.834 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:50:07.834 ************************************ 00:50:07.834 END TEST bdev_nbd 00:50:07.834 ************************************ 00:50:07.834 16:27:11 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:50:07.834 16:27:11 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:50:07.834 16:27:11 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:50:07.834 16:27:11 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:50:07.834 16:27:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:50:07.834 16:27:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:07.834 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:50:07.834 ************************************ 00:50:07.834 START TEST bdev_fio 00:50:07.834 ************************************ 00:50:07.834 16:27:11 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:50:07.834 16:27:11 -- bdev/blockdev.sh@329 -- # local env_context 00:50:07.834 16:27:11 -- 
bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:50:07.834 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:50:07.834 16:27:11 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:50:07.834 16:27:11 -- bdev/blockdev.sh@337 -- # echo '' 00:50:07.834 16:27:11 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:50:07.834 16:27:11 -- bdev/blockdev.sh@337 -- # env_context= 00:50:07.834 16:27:11 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:50:07.834 16:27:11 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:07.834 16:27:11 -- common/autotest_common.sh@1260 -- # local workload=verify 00:50:07.834 16:27:11 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:50:07.834 16:27:11 -- common/autotest_common.sh@1262 -- # local env_context= 00:50:07.834 16:27:11 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:50:07.834 16:27:11 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:50:07.834 16:27:11 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:50:07.834 16:27:11 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:50:07.834 16:27:11 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:07.835 16:27:11 -- common/autotest_common.sh@1280 -- # cat 00:50:07.835 16:27:11 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:50:07.835 16:27:11 -- common/autotest_common.sh@1293 -- # cat 00:50:07.835 16:27:11 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:50:07.835 16:27:11 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:50:07.835 16:27:11 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:50:07.835 16:27:11 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:50:07.835 16:27:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:50:07.835 16:27:11 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:50:07.835 16:27:11 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:50:07.835 16:27:11 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:50:07.835 16:27:11 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:07.835 16:27:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:50:07.835 16:27:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:07.835 16:27:11 -- common/autotest_common.sh@10 -- # set +x 00:50:07.835 ************************************ 00:50:07.835 START TEST bdev_fio_rw_verify 00:50:07.835 ************************************ 00:50:07.835 16:27:11 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:07.835 16:27:11 -- 
common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:07.835 16:27:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:50:07.835 16:27:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:50:07.835 16:27:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:50:07.835 16:27:11 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:07.835 16:27:11 -- common/autotest_common.sh@1320 -- # shift 00:50:07.835 16:27:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:50:07.835 16:27:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:50:07.835 16:27:11 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:50:07.835 16:27:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:50:07.835 16:27:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:50:07.835 16:27:11 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:50:07.835 16:27:11 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:50:07.835 16:27:11 -- common/autotest_common.sh@1326 -- # break 00:50:07.835 16:27:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:50:07.835 16:27:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:50:07.835 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:50:07.835 fio-3.35 00:50:07.835 Starting 1 thread 00:50:20.028 00:50:20.028 job_raid5f: (groupid=0, jobs=1): err= 0: pid=93371: Mon Jul 22 16:27:22 2024 00:50:20.028 read: IOPS=8364, BW=32.7MiB/s (34.3MB/s)(327MiB/10001msec) 00:50:20.028 slat (usec): min=26, max=855, avg=30.55, stdev= 7.90 00:50:20.028 clat (usec): min=13, max=1448, avg=190.72, stdev=85.83 00:50:20.028 lat (usec): min=43, max=1481, avg=221.27, stdev=89.52 00:50:20.028 clat percentiles (usec): 00:50:20.028 | 50.000th=[ 192], 99.000th=[ 392], 99.900th=[ 930], 99.990th=[ 979], 00:50:20.028 | 99.999th=[ 1450] 00:50:20.028 write: IOPS=8826, BW=34.5MiB/s (36.2MB/s)(340MiB/9869msec); 0 zone resets 00:50:20.028 slat (usec): min=12, max=310, avg=24.10, stdev= 5.65 00:50:20.028 clat (usec): min=81, max=1544, avg=428.19, stdev=67.26 00:50:20.028 lat (usec): min=103, max=1601, avg=452.29, stdev=69.45 00:50:20.028 clat percentiles (usec): 00:50:20.028 | 50.000th=[ 433], 99.000th=[ 594], 99.900th=[ 1270], 99.990th=[ 1450], 00:50:20.028 | 99.999th=[ 1549] 00:50:20.028 bw ( KiB/s): min=32536, max=37912, per=98.25%, avg=34690.53, stdev=1549.90, samples=19 00:50:20.028 iops : min= 8134, max= 9478, avg=8672.63, stdev=387.47, samples=19 00:50:20.028 lat (usec) : 20=0.01%, 50=0.01%, 100=5.96%, 250=30.25%, 500=60.85% 00:50:20.028 lat (usec) : 750=2.67%, 1000=0.17% 00:50:20.028 lat (msec) : 2=0.09% 00:50:20.028 cpu : usr=99.55%, sys=0.44%, 
ctx=16, majf=0, minf=7317 00:50:20.028 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:50:20.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:20.028 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:20.028 issued rwts: total=83657,87110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:20.028 latency : target=0, window=0, percentile=100.00%, depth=8 00:50:20.028 00:50:20.028 Run status group 0 (all jobs): 00:50:20.028 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=327MiB (343MB), run=10001-10001msec 00:50:20.028 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=340MiB (357MB), run=9869-9869msec 00:50:20.595 ----------------------------------------------------- 00:50:20.595 Suppressions used: 00:50:20.595 count bytes template 00:50:20.595 1 7 /usr/src/fio/parse.c 00:50:20.595 842 80832 /usr/src/fio/iolog.c 00:50:20.595 1 904 libcrypto.so 00:50:20.595 ----------------------------------------------------- 00:50:20.595 00:50:20.595 00:50:20.595 real 0m12.895s 00:50:20.595 user 0m13.757s 00:50:20.595 sys 0m0.714s 00:50:20.595 16:27:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:20.595 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:50:20.595 ************************************ 00:50:20.595 END TEST bdev_fio_rw_verify 00:50:20.595 ************************************ 00:50:20.595 16:27:24 -- bdev/blockdev.sh@348 -- # rm -f 00:50:20.595 16:27:24 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:20.595 16:27:24 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:20.595 16:27:24 -- common/autotest_common.sh@1260 -- # local workload=trim 00:50:20.595 16:27:24 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:50:20.595 16:27:24 -- common/autotest_common.sh@1262 -- # local env_context= 00:50:20.595 16:27:24 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:50:20.595 16:27:24 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:20.595 16:27:24 -- common/autotest_common.sh@1280 -- # cat 00:50:20.595 16:27:24 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:50:20.595 16:27:24 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:50:20.595 16:27:24 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "4c718a63-50d9-4e7f-89a8-0b258f446fdb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4c718a63-50d9-4e7f-89a8-0b258f446fdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": 
true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4c718a63-50d9-4e7f-89a8-0b258f446fdb",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fc0d1861-da47-4c0c-be69-954d1703aec4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "5ba44b7b-4407-4af6-8d0a-421a17d95229",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "3c59cc8e-ef1a-49e0-a8dd-017bda5dc1d0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:50:20.595 16:27:24 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:50:20.595 16:27:24 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:50:20.595 /home/vagrant/spdk_repo/spdk 00:50:20.595 16:27:24 -- bdev/blockdev.sh@360 -- # popd 00:50:20.595 16:27:24 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:50:20.595 16:27:24 -- bdev/blockdev.sh@362 -- # return 0 00:50:20.595 00:50:20.595 real 0m13.030s 00:50:20.595 user 0m13.805s 00:50:20.595 sys 0m0.804s 00:50:20.595 16:27:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:20.595 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:50:20.595 ************************************ 00:50:20.595 END TEST bdev_fio 00:50:20.595 ************************************ 00:50:20.595 16:27:24 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:50:20.595 16:27:24 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:50:20.595 16:27:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:20.595 16:27:24 -- common/autotest_common.sh@10 -- # set +x 00:50:20.595 ************************************ 00:50:20.595 START TEST bdev_verify 00:50:20.595 ************************************ 00:50:20.595 16:27:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:50:20.853 [2024-07-22 16:27:24.901553] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:50:20.853 [2024-07-22 16:27:24.901715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93520 ] 00:50:20.853 [2024-07-22 16:27:25.064104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:21.111 [2024-07-22 16:27:25.341420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:21.111 [2024-07-22 16:27:25.341476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:21.676 Running I/O for 5 seconds... 
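The bdev_verify stage that starts here drives the raid5f bdev with the bdevperf example application; the results table that follows reports one job per reactor core. A minimal sketch of the logged invocation, assuming the CI workspace layout (queue depth 128, 4 KiB I/Os, verify workload, 5 second run, core mask 0x3 for the two reactors; the -C flag is simply carried over from the logged command line):

```bash
#!/usr/bin/env bash
# Sketch of the bdev_verify invocation traced above; bdev.json describes the
# raid5f volume built from three malloc base bdevs, as dumped earlier in the log.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: CI workspace layout

"$SPDK_DIR/build/examples/bdevperf" \
    --json "$SPDK_DIR/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
```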
00:50:26.937 00:50:26.937 Latency(us) 00:50:26.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:26.937 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:50:26.937 Verification LBA range: start 0x0 length 0x2000 00:50:26.937 raid5f : 5.02 7135.54 27.87 0.00 0.00 28424.19 551.10 24546.21 00:50:26.937 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:50:26.937 Verification LBA range: start 0x2000 length 0x2000 00:50:26.937 raid5f : 5.02 6863.52 26.81 0.00 0.00 29560.26 372.36 24665.37 00:50:26.937 =================================================================================================================== 00:50:26.937 Total : 13999.05 54.68 0.00 0.00 28981.23 372.36 24665.37 00:50:28.331 00:50:28.331 real 0m7.693s 00:50:28.331 user 0m13.881s 00:50:28.331 sys 0m0.391s 00:50:28.331 16:27:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:28.331 ************************************ 00:50:28.331 END TEST bdev_verify 00:50:28.331 ************************************ 00:50:28.331 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:50:28.331 16:27:32 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:50:28.331 16:27:32 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:50:28.331 16:27:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:28.331 16:27:32 -- common/autotest_common.sh@10 -- # set +x 00:50:28.331 ************************************ 00:50:28.331 START TEST bdev_verify_big_io 00:50:28.331 ************************************ 00:50:28.331 16:27:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:50:28.590 [2024-07-22 16:27:32.666517] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:50:28.590 [2024-07-22 16:27:32.666751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93619 ] 00:50:28.590 [2024-07-22 16:27:32.852975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:50:29.165 [2024-07-22 16:27:33.140689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:29.165 [2024-07-22 16:27:33.140709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:29.739 Running I/O for 5 seconds... 
00:50:35.004 00:50:35.004 Latency(us) 00:50:35.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:35.004 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:50:35.004 Verification LBA range: start 0x0 length 0x200 00:50:35.004 raid5f : 5.16 553.98 34.62 0.00 0.00 6020329.32 224.35 208761.95 00:50:35.004 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:50:35.004 Verification LBA range: start 0x200 length 0x200 00:50:35.004 raid5f : 5.16 560.43 35.03 0.00 0.00 5953205.36 231.80 217341.21 00:50:35.004 =================================================================================================================== 00:50:35.004 Total : 1114.41 69.65 0.00 0.00 5986598.10 224.35 217341.21 00:50:36.378 00:50:36.378 real 0m7.834s 00:50:36.378 user 0m14.131s 00:50:36.378 sys 0m0.415s 00:50:36.378 ************************************ 00:50:36.378 END TEST bdev_verify_big_io 00:50:36.378 ************************************ 00:50:36.378 16:27:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:36.378 16:27:40 -- common/autotest_common.sh@10 -- # set +x 00:50:36.378 16:27:40 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:36.378 16:27:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:50:36.378 16:27:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:36.378 16:27:40 -- common/autotest_common.sh@10 -- # set +x 00:50:36.378 ************************************ 00:50:36.378 START TEST bdev_write_zeroes 00:50:36.378 ************************************ 00:50:36.378 16:27:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:36.378 [2024-07-22 16:27:40.535952] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:50:36.378 [2024-07-22 16:27:40.536122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93717 ] 00:50:36.636 [2024-07-22 16:27:40.707053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:36.894 [2024-07-22 16:27:40.991241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:37.460 Running I/O for 1 seconds... 
00:50:38.393 00:50:38.393 Latency(us) 00:50:38.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:38.393 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:50:38.393 raid5f : 1.01 20040.36 78.28 0.00 0.00 6362.11 1951.19 7596.22 00:50:38.393 =================================================================================================================== 00:50:38.393 Total : 20040.36 78.28 0.00 0.00 6362.11 1951.19 7596.22 00:50:40.299 00:50:40.299 real 0m3.598s 00:50:40.299 user 0m3.134s 00:50:40.299 sys 0m0.356s 00:50:40.299 16:27:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:40.299 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:50:40.299 ************************************ 00:50:40.299 END TEST bdev_write_zeroes 00:50:40.299 ************************************ 00:50:40.299 16:27:44 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:40.299 16:27:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:50:40.299 16:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:40.299 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:50:40.299 ************************************ 00:50:40.299 START TEST bdev_json_nonenclosed 00:50:40.299 ************************************ 00:50:40.299 16:27:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:40.299 [2024-07-22 16:27:44.191806] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:50:40.299 [2024-07-22 16:27:44.192031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93766 ] 00:50:40.299 [2024-07-22 16:27:44.369940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:40.557 [2024-07-22 16:27:44.619842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:40.557 [2024-07-22 16:27:44.620106] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:50:40.557 [2024-07-22 16:27:44.620137] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:50:40.816 00:50:40.816 real 0m0.955s 00:50:40.816 user 0m0.687s 00:50:40.816 sys 0m0.167s 00:50:40.816 16:27:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:40.816 16:27:45 -- common/autotest_common.sh@10 -- # set +x 00:50:40.816 ************************************ 00:50:40.816 END TEST bdev_json_nonenclosed 00:50:40.816 ************************************ 00:50:41.074 16:27:45 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:41.074 16:27:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:50:41.074 16:27:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:50:41.074 16:27:45 -- common/autotest_common.sh@10 -- # set +x 00:50:41.074 ************************************ 00:50:41.074 START TEST bdev_json_nonarray 00:50:41.074 ************************************ 00:50:41.074 16:27:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:50:41.074 [2024-07-22 16:27:45.196681] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:50:41.074 [2024-07-22 16:27:45.196904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93797 ] 00:50:41.333 [2024-07-22 16:27:45.373314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:41.592 [2024-07-22 16:27:45.616902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:41.592 [2024-07-22 16:27:45.617166] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
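The two *ERROR* lines above are the expected outcome of the negative JSON-config tests: the loader requires a single top-level object whose "subsystems" key is an array, and nonenclosed.json / nonarray.json each break one of those rules. A hedged illustration of the shape being enforced; the bdev entry below is only a placeholder, not the real content of the CI test files:

```bash
#!/usr/bin/env bash
# Illustration only: a minimal well-formed SPDK JSON config next to the two
# failure modes exercised above.
cat > /tmp/minimal-valid.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# A file whose top level is not an object trips "not enclosed in {}";
# an object whose "subsystems" value is not an array trips the second error.
```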
00:50:41.592 [2024-07-22 16:27:45.617204] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:50:41.851 00:50:41.851 real 0m0.939s 00:50:41.851 user 0m0.691s 00:50:41.851 sys 0m0.148s 00:50:41.851 16:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:41.851 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:50:41.851 ************************************ 00:50:41.851 END TEST bdev_json_nonarray 00:50:41.851 ************************************ 00:50:41.851 16:27:46 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:50:41.851 16:27:46 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:50:41.851 16:27:46 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:50:41.851 16:27:46 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:50:41.851 16:27:46 -- bdev/blockdev.sh@809 -- # cleanup 00:50:41.851 16:27:46 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:50:41.851 16:27:46 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:50:41.851 16:27:46 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:50:41.851 16:27:46 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:50:41.851 16:27:46 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:50:41.851 16:27:46 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:50:41.851 00:50:41.851 real 0m53.041s 00:50:41.851 user 1m12.888s 00:50:41.851 sys 0m5.931s 00:50:41.851 16:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:50:41.851 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:50:41.851 ************************************ 00:50:41.851 END TEST blockdev_raid5f 00:50:41.851 ************************************ 00:50:42.109 16:27:46 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:50:42.110 16:27:46 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:50:42.110 16:27:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:50:42.110 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:50:42.110 16:27:46 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:50:42.110 16:27:46 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:50:42.110 16:27:46 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:50:42.110 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:50:44.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:50:44.011 Waiting for block devices as requested 00:50:44.011 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:50:44.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:50:44.270 Cleaning 00:50:44.270 Removing: /var/run/dpdk/spdk0/config 00:50:44.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:50:44.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:50:44.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:50:44.270 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:50:44.270 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:50:44.270 Removing: /var/run/dpdk/spdk0/hugepage_info 00:50:44.270 Removing: /dev/shm/spdk_tgt_trace.pid61235 00:50:44.270 Removing: /var/run/dpdk/spdk0 00:50:44.270 Removing: /var/run/dpdk/spdk_pid61009 00:50:44.270 Removing: /var/run/dpdk/spdk_pid61235 00:50:44.270 Removing: /var/run/dpdk/spdk_pid61497 00:50:44.270 Removing: /var/run/dpdk/spdk_pid61750 00:50:44.270 Removing: /var/run/dpdk/spdk_pid61937 00:50:44.270 Removing: /var/run/dpdk/spdk_pid62042 00:50:44.529 Removing: 
/var/run/dpdk/spdk_pid62146 00:50:44.529 Removing: /var/run/dpdk/spdk_pid62268 00:50:44.529 Removing: /var/run/dpdk/spdk_pid62375 00:50:44.529 Removing: /var/run/dpdk/spdk_pid62420 00:50:44.529 Removing: /var/run/dpdk/spdk_pid62462 00:50:44.529 Removing: /var/run/dpdk/spdk_pid62529 00:50:44.529 Removing: /var/run/dpdk/spdk_pid62635 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63131 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63214 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63296 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63319 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63469 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63493 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63649 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63673 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63742 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63768 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63832 00:50:44.529 Removing: /var/run/dpdk/spdk_pid63863 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64047 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64089 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64131 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64214 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64297 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64334 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64412 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64448 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64490 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64527 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64574 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64604 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64652 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64688 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64736 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64763 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64814 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64846 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64892 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64924 00:50:44.529 Removing: /var/run/dpdk/spdk_pid64976 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65008 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65054 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65090 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65138 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65175 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65222 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65259 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65300 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65337 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65384 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65421 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65468 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65499 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65546 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65583 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65630 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65661 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65709 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65748 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65802 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65838 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65887 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65919 00:50:44.529 Removing: /var/run/dpdk/spdk_pid65971 00:50:44.529 Removing: /var/run/dpdk/spdk_pid66003 00:50:44.529 Removing: /var/run/dpdk/spdk_pid66056 00:50:44.529 Removing: /var/run/dpdk/spdk_pid66137 00:50:44.529 Removing: /var/run/dpdk/spdk_pid66258 00:50:44.529 Removing: /var/run/dpdk/spdk_pid66434 00:50:44.529 Removing: /var/run/dpdk/spdk_pid66519 
00:50:44.529 Removing: /var/run/dpdk/spdk_pid66583 00:50:44.529 Removing: /var/run/dpdk/spdk_pid67798 00:50:44.529 Removing: /var/run/dpdk/spdk_pid68003 00:50:44.529 Removing: /var/run/dpdk/spdk_pid68200 00:50:44.529 Removing: /var/run/dpdk/spdk_pid68315 00:50:44.529 Removing: /var/run/dpdk/spdk_pid68450 00:50:44.529 Removing: /var/run/dpdk/spdk_pid68520 00:50:44.529 Removing: /var/run/dpdk/spdk_pid68557 00:50:44.529 Removing: /var/run/dpdk/spdk_pid68588 00:50:44.529 Removing: /var/run/dpdk/spdk_pid69006 00:50:44.529 Removing: /var/run/dpdk/spdk_pid69093 00:50:44.529 Removing: /var/run/dpdk/spdk_pid69206 00:50:44.529 Removing: /var/run/dpdk/spdk_pid69265 00:50:44.529 Removing: /var/run/dpdk/spdk_pid70401 00:50:44.529 Removing: /var/run/dpdk/spdk_pid71245 00:50:44.529 Removing: /var/run/dpdk/spdk_pid72072 00:50:44.529 Removing: /var/run/dpdk/spdk_pid73121 00:50:44.788 Removing: /var/run/dpdk/spdk_pid74125 00:50:44.788 Removing: /var/run/dpdk/spdk_pid75136 00:50:44.788 Removing: /var/run/dpdk/spdk_pid76539 00:50:44.788 Removing: /var/run/dpdk/spdk_pid77692 00:50:44.788 Removing: /var/run/dpdk/spdk_pid78833 00:50:44.788 Removing: /var/run/dpdk/spdk_pid79465 00:50:44.788 Removing: /var/run/dpdk/spdk_pid79990 00:50:44.788 Removing: /var/run/dpdk/spdk_pid80598 00:50:44.788 Removing: /var/run/dpdk/spdk_pid81053 00:50:44.788 Removing: /var/run/dpdk/spdk_pid81584 00:50:44.788 Removing: /var/run/dpdk/spdk_pid82106 00:50:44.788 Removing: /var/run/dpdk/spdk_pid82731 00:50:44.788 Removing: /var/run/dpdk/spdk_pid83211 00:50:44.788 Removing: /var/run/dpdk/spdk_pid84500 00:50:44.788 Removing: /var/run/dpdk/spdk_pid85072 00:50:44.788 Removing: /var/run/dpdk/spdk_pid85586 00:50:44.788 Removing: /var/run/dpdk/spdk_pid87013 00:50:44.788 Removing: /var/run/dpdk/spdk_pid87644 00:50:44.788 Removing: /var/run/dpdk/spdk_pid88209 00:50:44.788 Removing: /var/run/dpdk/spdk_pid88926 00:50:44.788 Removing: /var/run/dpdk/spdk_pid88979 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89037 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89095 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89220 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89374 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89590 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89857 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89870 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89919 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89948 00:50:44.788 Removing: /var/run/dpdk/spdk_pid89975 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90011 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90041 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90071 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90097 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90134 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90159 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90195 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90220 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90251 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90287 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90316 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90343 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90379 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90408 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90435 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90482 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90511 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90552 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90632 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90675 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90698 00:50:44.788 Removing: 
/var/run/dpdk/spdk_pid90744 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90766 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90791 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90844 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90873 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90918 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90939 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90964 00:50:44.788 Removing: /var/run/dpdk/spdk_pid90984 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91009 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91039 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91060 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91085 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91124 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91168 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91194 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91236 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91263 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91289 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91347 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91371 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91415 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91440 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91461 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91491 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91515 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91536 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91561 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91587 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91676 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91775 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91936 00:50:44.788 Removing: /var/run/dpdk/spdk_pid91962 00:50:44.788 Removing: /var/run/dpdk/spdk_pid92013 00:50:44.788 Removing: /var/run/dpdk/spdk_pid92070 00:50:44.788 Removing: /var/run/dpdk/spdk_pid92108 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92140 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92177 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92220 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92252 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92331 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92393 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92442 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92683 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92796 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92841 00:50:45.047 Removing: /var/run/dpdk/spdk_pid92942 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93016 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93064 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93357 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93520 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93619 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93717 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93766 00:50:45.047 Removing: /var/run/dpdk/spdk_pid93797 00:50:45.047 Clean 00:50:45.047 killing process with pid 51509 00:50:45.047 killing process with pid 51510 00:50:45.047 16:27:49 -- common/autotest_common.sh@1436 -- # return 0 00:50:45.047 16:27:49 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:50:45.047 16:27:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:45.047 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:50:45.047 16:27:49 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:50:45.047 16:27:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:50:45.047 16:27:49 -- common/autotest_common.sh@10 -- # set +x 00:50:45.305 16:27:49 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:50:45.305 16:27:49 -- spdk/autotest.sh@392 -- # [[ 
-f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:50:45.305 16:27:49 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:50:45.305 16:27:49 -- spdk/autotest.sh@394 -- # hash lcov 00:50:45.305 16:27:49 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:50:45.305 16:27:49 -- spdk/autotest.sh@396 -- # hostname 00:50:45.305 16:27:49 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:50:45.563 geninfo: WARNING: invalid characters removed from testname! 00:51:53.342 16:28:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:53.342 16:28:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:53.342 16:28:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:55.876 16:28:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:59.219 16:29:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:02.292 16:29:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:52:05.619 16:29:09 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:52:05.619 16:29:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:05.619 16:29:09 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:52:05.619 16:29:09 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:05.619 16:29:09 -- scripts/common.sh@442 -- $ source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:52:05.619 16:29:09 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:05.619 16:29:09 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:05.619 16:29:09 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:05.619 16:29:09 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:05.619 16:29:09 -- paths/export.sh@6 -- $ export PATH 00:52:05.619 16:29:09 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:52:05.619 16:29:09 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:52:05.619 16:29:09 -- common/autobuild_common.sh@438 -- $ date +%s 00:52:05.619 16:29:09 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721665749.XXXXXX 00:52:05.619 16:29:09 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721665749.s99GyD 00:52:05.619 16:29:09 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:52:05.619 16:29:09 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:52:05.619 16:29:09 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:52:05.619 16:29:09 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:52:05.619 16:29:09 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:52:05.619 16:29:09 -- common/autobuild_common.sh@454 -- $ get_config_params 00:52:05.619 16:29:09 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:52:05.619 16:29:09 -- common/autotest_common.sh@10 -- $ set +x 00:52:05.619 16:29:09 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd 
--with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:52:05.619 16:29:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:52:05.619 16:29:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:52:05.619 16:29:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:52:05.619 16:29:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:52:05.619 16:29:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:52:05.619 16:29:09 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:52:05.619 16:29:09 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:52:05.619 16:29:09 -- common/autotest_common.sh@10 -- $ set +x 00:52:05.619 16:29:09 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:52:05.619 16:29:09 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:52:05.619 16:29:09 -- spdk/autopackage.sh@40 -- $ get_config_params 00:52:05.619 16:29:09 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:52:05.619 16:29:09 -- common/autotest_common.sh@10 -- $ set +x 00:52:05.619 16:29:09 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:52:05.619 16:29:09 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:52:05.619 16:29:09 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --enable-lto 00:52:05.619 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:52:05.619 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:52:05.878 Using 'verbs' RDMA provider 00:52:19.014 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:52:31.216 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:52:31.216 Creating mk/config.mk...done. 00:52:31.216 Creating mk/cc.flags.mk...done. 00:52:31.216 Type 'make' to build. 00:52:31.216 16:29:34 -- spdk/autopackage.sh@43 -- $ make -j10 00:52:31.216 make[1]: Nothing to be done for 'all'. 
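The autopackage "build_release" step traced above reuses the configure flags from the test run, strips --enable-debug, adds link-time optimization, and rebuilds; the Meson output that follows is the default DPDK submodule being configured as part of that build. A minimal sketch of the flag handling, with the flag list copied from the logged config_params value:

```bash
#!/usr/bin/env bash
# Sketch of the release rebuild traced above; run from an existing SPDK checkout.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumption: CI workspace layout
cd "$SPDK_DIR"

config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'

# Same substitution the script performs: keep everything except the debug flag,
# then layer LTO on top for the release configuration.
release_params=$(echo "$config_params" | sed 's/--enable-debug//g')

# Intentionally unquoted so the flag string splits into separate arguments.
./configure $release_params --enable-lto
make -j10
```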
00:52:36.575 The Meson build system 00:52:36.575 Version: 1.4.1 00:52:36.575 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:52:36.575 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:52:36.575 Build type: native build 00:52:36.575 Program cat found: YES (/usr/bin/cat) 00:52:36.575 Project name: DPDK 00:52:36.575 Project version: 23.11.0 00:52:36.575 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:52:36.575 C linker for the host machine: cc ld.bfd 2.42 00:52:36.575 Host machine cpu family: x86_64 00:52:36.575 Host machine cpu: x86_64 00:52:36.575 Message: ## Building in Developer Mode ## 00:52:36.575 Program pkg-config found: YES (/usr/bin/pkg-config) 00:52:36.575 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:52:36.575 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:52:36.575 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:52:36.575 Program cat found: YES (/usr/bin/cat) 00:52:36.575 Compiler for C supports arguments -march=native: YES 00:52:36.575 Checking for size of "void *" : 8 00:52:36.575 Checking for size of "void *" : 8 (cached) 00:52:36.575 Library m found: YES 00:52:36.575 Library numa found: YES 00:52:36.575 Has header "numaif.h" : YES 00:52:36.575 Library fdt found: NO 00:52:36.575 Library execinfo found: NO 00:52:36.575 Has header "execinfo.h" : YES 00:52:36.575 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:52:36.575 Run-time dependency libarchive found: NO (tried pkgconfig) 00:52:36.575 Run-time dependency libbsd found: NO (tried pkgconfig) 00:52:36.575 Run-time dependency jansson found: NO (tried pkgconfig) 00:52:36.575 Run-time dependency openssl found: YES 3.0.13 00:52:36.575 Run-time dependency libpcap found: NO (tried pkgconfig) 00:52:36.575 Library pcap found: NO 00:52:36.575 Compiler for C supports arguments -Wcast-qual: YES 00:52:36.575 Compiler for C supports arguments -Wdeprecated: YES 00:52:36.575 Compiler for C supports arguments -Wformat: YES 00:52:36.575 Compiler for C supports arguments -Wformat-nonliteral: YES 00:52:36.575 Compiler for C supports arguments -Wformat-security: YES 00:52:36.575 Compiler for C supports arguments -Wmissing-declarations: YES 00:52:36.575 Compiler for C supports arguments -Wmissing-prototypes: YES 00:52:36.576 Compiler for C supports arguments -Wnested-externs: YES 00:52:36.576 Compiler for C supports arguments -Wold-style-definition: YES 00:52:36.576 Compiler for C supports arguments -Wpointer-arith: YES 00:52:36.576 Compiler for C supports arguments -Wsign-compare: YES 00:52:36.576 Compiler for C supports arguments -Wstrict-prototypes: YES 00:52:36.576 Compiler for C supports arguments -Wundef: YES 00:52:36.576 Compiler for C supports arguments -Wwrite-strings: YES 00:52:36.576 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:52:36.576 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:52:36.576 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:52:36.576 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:52:36.576 Program objdump found: YES (/usr/bin/objdump) 00:52:36.576 Compiler for C supports arguments -mavx512f: YES 00:52:36.576 Checking if "AVX512 checking" compiles: YES 00:52:36.576 Fetching value of define "__SSE4_2__" : 1 00:52:36.576 Fetching value of define "__AES__" : 1 00:52:36.576 Fetching value of define "__AVX__" : 1 00:52:36.576 
Fetching value of define "__AVX2__" : 1 00:52:36.576 Fetching value of define "__AVX512BW__" : (undefined) 00:52:36.576 Fetching value of define "__AVX512CD__" : (undefined) 00:52:36.576 Fetching value of define "__AVX512DQ__" : (undefined) 00:52:36.576 Fetching value of define "__AVX512F__" : (undefined) 00:52:36.576 Fetching value of define "__AVX512VL__" : (undefined) 00:52:36.576 Fetching value of define "__PCLMUL__" : 1 00:52:36.576 Fetching value of define "__RDRND__" : 1 00:52:36.576 Fetching value of define "__RDSEED__" : 1 00:52:36.576 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:52:36.576 Fetching value of define "__znver1__" : (undefined) 00:52:36.576 Fetching value of define "__znver2__" : (undefined) 00:52:36.576 Fetching value of define "__znver3__" : (undefined) 00:52:36.576 Fetching value of define "__znver4__" : (undefined) 00:52:36.576 Compiler for C supports arguments -ffat-lto-objects: YES 00:52:36.576 Library asan found: YES 00:52:36.576 Compiler for C supports arguments -Wno-format-truncation: YES 00:52:36.576 Message: lib/log: Defining dependency "log" 00:52:36.576 Message: lib/kvargs: Defining dependency "kvargs" 00:52:36.576 Message: lib/telemetry: Defining dependency "telemetry" 00:52:36.576 Library rt found: YES 00:52:36.576 Checking for function "getentropy" : NO 00:52:36.576 Message: lib/eal: Defining dependency "eal" 00:52:36.576 Message: lib/ring: Defining dependency "ring" 00:52:36.576 Message: lib/rcu: Defining dependency "rcu" 00:52:36.576 Message: lib/mempool: Defining dependency "mempool" 00:52:36.576 Message: lib/mbuf: Defining dependency "mbuf" 00:52:36.576 Fetching value of define "__PCLMUL__" : 1 (cached) 00:52:36.576 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:52:36.576 Compiler for C supports arguments -mpclmul: YES 00:52:36.576 Compiler for C supports arguments -maes: YES 00:52:36.576 Compiler for C supports arguments -mavx512f: YES (cached) 00:52:36.576 Compiler for C supports arguments -mavx512bw: YES 00:52:36.576 Compiler for C supports arguments -mavx512dq: YES 00:52:36.576 Compiler for C supports arguments -mavx512vl: YES 00:52:36.576 Compiler for C supports arguments -mvpclmulqdq: YES 00:52:36.576 Compiler for C supports arguments -mavx2: YES 00:52:36.576 Compiler for C supports arguments -mavx: YES 00:52:36.576 Message: lib/net: Defining dependency "net" 00:52:36.576 Message: lib/meter: Defining dependency "meter" 00:52:36.576 Message: lib/ethdev: Defining dependency "ethdev" 00:52:36.576 Message: lib/pci: Defining dependency "pci" 00:52:36.576 Message: lib/cmdline: Defining dependency "cmdline" 00:52:36.576 Message: lib/hash: Defining dependency "hash" 00:52:36.576 Message: lib/timer: Defining dependency "timer" 00:52:36.576 Message: lib/compressdev: Defining dependency "compressdev" 00:52:36.576 Message: lib/cryptodev: Defining dependency "cryptodev" 00:52:36.576 Message: lib/dmadev: Defining dependency "dmadev" 00:52:36.576 Compiler for C supports arguments -Wno-cast-qual: YES 00:52:36.576 Message: lib/power: Defining dependency "power" 00:52:36.576 Message: lib/reorder: Defining dependency "reorder" 00:52:36.576 Message: lib/security: Defining dependency "security" 00:52:36.576 Has header "linux/userfaultfd.h" : YES 00:52:36.576 Has header "linux/vduse.h" : YES 00:52:36.576 Message: lib/vhost: Defining dependency "vhost" 00:52:36.576 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:52:36.576 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:52:36.576 Message: 
drivers/bus/vdev: Defining dependency "bus_vdev" 00:52:36.576 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:52:36.576 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:52:36.576 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:52:36.576 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:52:36.576 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:52:36.576 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:52:36.576 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:52:36.576 Program doxygen found: YES (/usr/bin/doxygen) 00:52:36.576 Configuring doxy-api-html.conf using configuration 00:52:36.576 Configuring doxy-api-man.conf using configuration 00:52:36.576 Program mandb found: YES (/usr/bin/mandb) 00:52:36.576 Program sphinx-build found: NO 00:52:36.576 Configuring rte_build_config.h using configuration 00:52:36.576 Message: 00:52:36.576 ================= 00:52:36.576 Applications Enabled 00:52:36.576 ================= 00:52:36.576 00:52:36.576 apps: 00:52:36.576 00:52:36.576 00:52:36.576 Message: 00:52:36.576 ================= 00:52:36.576 Libraries Enabled 00:52:36.576 ================= 00:52:36.576 00:52:36.576 libs: 00:52:36.576 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:52:36.576 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:52:36.576 cryptodev, dmadev, power, reorder, security, vhost, 00:52:36.576 00:52:36.576 Message: 00:52:36.576 =============== 00:52:36.576 Drivers Enabled 00:52:36.576 =============== 00:52:36.576 00:52:36.576 common: 00:52:36.576 00:52:36.576 bus: 00:52:36.576 pci, vdev, 00:52:36.576 mempool: 00:52:36.576 ring, 00:52:36.576 dma: 00:52:36.576 00:52:36.576 net: 00:52:36.576 00:52:36.576 crypto: 00:52:36.576 00:52:36.576 compress: 00:52:36.576 00:52:36.576 vdpa: 00:52:36.576 00:52:36.576 00:52:36.576 Message: 00:52:36.576 ================= 00:52:36.576 Content Skipped 00:52:36.576 ================= 00:52:36.576 00:52:36.576 apps: 00:52:36.576 dumpcap: explicitly disabled via build config 00:52:36.576 graph: explicitly disabled via build config 00:52:36.576 pdump: explicitly disabled via build config 00:52:36.576 proc-info: explicitly disabled via build config 00:52:36.576 test-acl: explicitly disabled via build config 00:52:36.576 test-bbdev: explicitly disabled via build config 00:52:36.576 test-cmdline: explicitly disabled via build config 00:52:36.576 test-compress-perf: explicitly disabled via build config 00:52:36.576 test-crypto-perf: explicitly disabled via build config 00:52:36.576 test-dma-perf: explicitly disabled via build config 00:52:36.576 test-eventdev: explicitly disabled via build config 00:52:36.576 test-fib: explicitly disabled via build config 00:52:36.576 test-flow-perf: explicitly disabled via build config 00:52:36.576 test-gpudev: explicitly disabled via build config 00:52:36.576 test-mldev: explicitly disabled via build config 00:52:36.576 test-pipeline: explicitly disabled via build config 00:52:36.576 test-pmd: explicitly disabled via build config 00:52:36.576 test-regex: explicitly disabled via build config 00:52:36.576 test-sad: explicitly disabled via build config 00:52:36.576 test-security-perf: explicitly disabled via build config 00:52:36.576 00:52:36.576 libs: 00:52:36.576 metrics: explicitly disabled via build config 00:52:36.576 acl: explicitly disabled via build config 00:52:36.576 bbdev: explicitly disabled via build config 
00:52:36.576 bitratestats: explicitly disabled via build config 00:52:36.576 bpf: explicitly disabled via build config 00:52:36.576 cfgfile: explicitly disabled via build config 00:52:36.576 distributor: explicitly disabled via build config 00:52:36.576 efd: explicitly disabled via build config 00:52:36.576 eventdev: explicitly disabled via build config 00:52:36.576 dispatcher: explicitly disabled via build config 00:52:36.576 gpudev: explicitly disabled via build config 00:52:36.576 gro: explicitly disabled via build config 00:52:36.576 gso: explicitly disabled via build config 00:52:36.576 ip_frag: explicitly disabled via build config 00:52:36.576 jobstats: explicitly disabled via build config 00:52:36.576 latencystats: explicitly disabled via build config 00:52:36.576 lpm: explicitly disabled via build config 00:52:36.576 member: explicitly disabled via build config 00:52:36.576 pcapng: explicitly disabled via build config 00:52:36.576 rawdev: explicitly disabled via build config 00:52:36.576 regexdev: explicitly disabled via build config 00:52:36.576 mldev: explicitly disabled via build config 00:52:36.576 rib: explicitly disabled via build config 00:52:36.576 sched: explicitly disabled via build config 00:52:36.576 stack: explicitly disabled via build config 00:52:36.576 ipsec: explicitly disabled via build config 00:52:36.576 pdcp: explicitly disabled via build config 00:52:36.576 fib: explicitly disabled via build config 00:52:36.576 port: explicitly disabled via build config 00:52:36.576 pdump: explicitly disabled via build config 00:52:36.576 table: explicitly disabled via build config 00:52:36.576 pipeline: explicitly disabled via build config 00:52:36.576 graph: explicitly disabled via build config 00:52:36.576 node: explicitly disabled via build config 00:52:36.576 00:52:36.576 drivers: 00:52:36.576 common/cpt: not in enabled drivers build config 00:52:36.576 common/dpaax: not in enabled drivers build config 00:52:36.576 common/iavf: not in enabled drivers build config 00:52:36.576 common/idpf: not in enabled drivers build config 00:52:36.576 common/mvep: not in enabled drivers build config 00:52:36.576 common/octeontx: not in enabled drivers build config 00:52:36.576 bus/auxiliary: not in enabled drivers build config 00:52:36.576 bus/cdx: not in enabled drivers build config 00:52:36.576 bus/dpaa: not in enabled drivers build config 00:52:36.577 bus/fslmc: not in enabled drivers build config 00:52:36.577 bus/ifpga: not in enabled drivers build config 00:52:36.577 bus/platform: not in enabled drivers build config 00:52:36.577 bus/vmbus: not in enabled drivers build config 00:52:36.577 common/cnxk: not in enabled drivers build config 00:52:36.577 common/mlx5: not in enabled drivers build config 00:52:36.577 common/nfp: not in enabled drivers build config 00:52:36.577 common/qat: not in enabled drivers build config 00:52:36.577 common/sfc_efx: not in enabled drivers build config 00:52:36.577 mempool/bucket: not in enabled drivers build config 00:52:36.577 mempool/cnxk: not in enabled drivers build config 00:52:36.577 mempool/dpaa: not in enabled drivers build config 00:52:36.577 mempool/dpaa2: not in enabled drivers build config 00:52:36.577 mempool/octeontx: not in enabled drivers build config 00:52:36.577 mempool/stack: not in enabled drivers build config 00:52:36.577 dma/cnxk: not in enabled drivers build config 00:52:36.577 dma/dpaa: not in enabled drivers build config 00:52:36.577 dma/dpaa2: not in enabled drivers build config 00:52:36.577 dma/hisilicon: not in enabled 
drivers build config 00:52:36.577 dma/idxd: not in enabled drivers build config 00:52:36.577 dma/ioat: not in enabled drivers build config 00:52:36.577 dma/skeleton: not in enabled drivers build config 00:52:36.577 net/af_packet: not in enabled drivers build config 00:52:36.577 net/af_xdp: not in enabled drivers build config 00:52:36.577 net/ark: not in enabled drivers build config 00:52:36.577 net/atlantic: not in enabled drivers build config 00:52:36.577 net/avp: not in enabled drivers build config 00:52:36.577 net/axgbe: not in enabled drivers build config 00:52:36.577 net/bnx2x: not in enabled drivers build config 00:52:36.577 net/bnxt: not in enabled drivers build config 00:52:36.577 net/bonding: not in enabled drivers build config 00:52:36.577 net/cnxk: not in enabled drivers build config 00:52:36.577 net/cpfl: not in enabled drivers build config 00:52:36.577 net/cxgbe: not in enabled drivers build config 00:52:36.577 net/dpaa: not in enabled drivers build config 00:52:36.577 net/dpaa2: not in enabled drivers build config 00:52:36.577 net/e1000: not in enabled drivers build config 00:52:36.577 net/ena: not in enabled drivers build config 00:52:36.577 net/enetc: not in enabled drivers build config 00:52:36.577 net/enetfec: not in enabled drivers build config 00:52:36.577 net/enic: not in enabled drivers build config 00:52:36.577 net/failsafe: not in enabled drivers build config 00:52:36.577 net/fm10k: not in enabled drivers build config 00:52:36.577 net/gve: not in enabled drivers build config 00:52:36.577 net/hinic: not in enabled drivers build config 00:52:36.577 net/hns3: not in enabled drivers build config 00:52:36.577 net/i40e: not in enabled drivers build config 00:52:36.577 net/iavf: not in enabled drivers build config 00:52:36.577 net/ice: not in enabled drivers build config 00:52:36.577 net/idpf: not in enabled drivers build config 00:52:36.577 net/igc: not in enabled drivers build config 00:52:36.577 net/ionic: not in enabled drivers build config 00:52:36.577 net/ipn3ke: not in enabled drivers build config 00:52:36.577 net/ixgbe: not in enabled drivers build config 00:52:36.577 net/mana: not in enabled drivers build config 00:52:36.577 net/memif: not in enabled drivers build config 00:52:36.577 net/mlx4: not in enabled drivers build config 00:52:36.577 net/mlx5: not in enabled drivers build config 00:52:36.577 net/mvneta: not in enabled drivers build config 00:52:36.577 net/mvpp2: not in enabled drivers build config 00:52:36.577 net/netvsc: not in enabled drivers build config 00:52:36.577 net/nfb: not in enabled drivers build config 00:52:36.577 net/nfp: not in enabled drivers build config 00:52:36.577 net/ngbe: not in enabled drivers build config 00:52:36.577 net/null: not in enabled drivers build config 00:52:36.577 net/octeontx: not in enabled drivers build config 00:52:36.577 net/octeon_ep: not in enabled drivers build config 00:52:36.577 net/pcap: not in enabled drivers build config 00:52:36.577 net/pfe: not in enabled drivers build config 00:52:36.577 net/qede: not in enabled drivers build config 00:52:36.577 net/ring: not in enabled drivers build config 00:52:36.577 net/sfc: not in enabled drivers build config 00:52:36.577 net/softnic: not in enabled drivers build config 00:52:36.577 net/tap: not in enabled drivers build config 00:52:36.577 net/thunderx: not in enabled drivers build config 00:52:36.577 net/txgbe: not in enabled drivers build config 00:52:36.577 net/vdev_netvsc: not in enabled drivers build config 00:52:36.577 net/vhost: not in enabled drivers build 
config 00:52:36.577 net/virtio: not in enabled drivers build config 00:52:36.577 net/vmxnet3: not in enabled drivers build config 00:52:36.577 raw/*: missing internal dependency, "rawdev" 00:52:36.577 crypto/armv8: not in enabled drivers build config 00:52:36.577 crypto/bcmfs: not in enabled drivers build config 00:52:36.577 crypto/caam_jr: not in enabled drivers build config 00:52:36.577 crypto/ccp: not in enabled drivers build config 00:52:36.577 crypto/cnxk: not in enabled drivers build config 00:52:36.577 crypto/dpaa_sec: not in enabled drivers build config 00:52:36.577 crypto/dpaa2_sec: not in enabled drivers build config 00:52:36.577 crypto/ipsec_mb: not in enabled drivers build config 00:52:36.577 crypto/mlx5: not in enabled drivers build config 00:52:36.577 crypto/mvsam: not in enabled drivers build config 00:52:36.577 crypto/nitrox: not in enabled drivers build config 00:52:36.577 crypto/null: not in enabled drivers build config 00:52:36.577 crypto/octeontx: not in enabled drivers build config 00:52:36.577 crypto/openssl: not in enabled drivers build config 00:52:36.577 crypto/scheduler: not in enabled drivers build config 00:52:36.577 crypto/uadk: not in enabled drivers build config 00:52:36.577 crypto/virtio: not in enabled drivers build config 00:52:36.577 compress/isal: not in enabled drivers build config 00:52:36.577 compress/mlx5: not in enabled drivers build config 00:52:36.577 compress/octeontx: not in enabled drivers build config 00:52:36.577 compress/zlib: not in enabled drivers build config 00:52:36.577 regex/*: missing internal dependency, "regexdev" 00:52:36.577 ml/*: missing internal dependency, "mldev" 00:52:36.577 vdpa/ifc: not in enabled drivers build config 00:52:36.577 vdpa/mlx5: not in enabled drivers build config 00:52:36.577 vdpa/nfp: not in enabled drivers build config 00:52:36.577 vdpa/sfc: not in enabled drivers build config 00:52:36.577 event/*: missing internal dependency, "eventdev" 00:52:36.577 baseband/*: missing internal dependency, "bbdev" 00:52:36.577 gpu/*: missing internal dependency, "gpudev" 00:52:36.577 00:52:36.577 00:52:36.577 Build targets in project: 85 00:52:36.577 00:52:36.577 DPDK 23.11.0 00:52:36.577 00:52:36.577 User defined options 00:52:36.577 default_library : static 00:52:36.577 libdir : lib 00:52:36.577 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:52:36.577 b_lto : true 00:52:36.577 b_sanitize : address 00:52:36.577 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:52:36.577 c_link_args : 00:52:36.577 cpu_instruction_set: native 00:52:36.577 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:52:36.577 disable_libs : mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:52:36.577 enable_docs : false 00:52:36.577 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:52:36.577 enable_kmods : false 00:52:36.577 tests : false 00:52:36.577 00:52:36.577 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:52:37.512 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:52:37.512 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
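To make the "User defined options" summary above easier to act on, here is a rough stand-alone equivalent expressed as a direct meson invocation. This is a sketch under the assumption that those options map onto meson -D flags in the usual way (SPDK's own makefiles drive the same configuration internally); the option values are copied verbatim from the summary:

    # Illustrative re-creation of the DPDK configuration summarized above.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
      -Ddefault_library=static -Dlibdir=lib \
      -Dprefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_lto=true -Db_sanitize=address \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump' \
      -Ddisable_libs='mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec' \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
      -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C build-tmp -j 10

The "[N/265]" lines that follow are ninja working through the 85 build targets reported above; only the bus/pci, bus/vdev and mempool/ring drivers plus the core libraries survive the disable lists.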
00:52:37.512 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:52:37.512 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:52:37.512 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:52:37.512 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:52:37.512 [6/265] Linking static target lib/librte_kvargs.a 00:52:37.512 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:52:37.512 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:52:37.512 [9/265] Linking static target lib/librte_log.a 00:52:37.769 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:52:37.769 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:52:38.027 [12/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:52:38.027 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:52:38.027 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:52:38.285 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:52:38.285 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:52:38.543 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:52:38.543 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:52:38.543 [19/265] Linking target lib/librte_log.so.24.0 00:52:38.543 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:52:38.543 [21/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:52:38.815 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:52:38.815 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:52:38.815 [24/265] Linking target lib/librte_kvargs.so.24.0 00:52:39.079 [25/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:52:39.079 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:52:39.079 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:52:39.079 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:52:39.336 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:52:39.336 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:52:39.595 [31/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:52:39.595 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:52:39.595 [33/265] Linking static target lib/librte_telemetry.a 00:52:39.595 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:52:39.853 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:52:39.853 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:52:39.853 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:52:39.853 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:52:39.853 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:52:39.853 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:52:39.853 [41/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:52:39.853 [42/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:52:40.420 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:52:40.420 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:52:40.678 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:52:40.678 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:52:40.678 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:52:40.936 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:52:40.936 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:52:40.936 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:52:41.256 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:52:41.256 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:52:41.256 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:52:41.256 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:52:41.515 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:52:41.515 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:52:41.515 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:52:41.515 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:52:41.772 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:52:41.772 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:52:41.772 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:52:41.772 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:52:41.773 [63/265] Linking target lib/librte_telemetry.so.24.0 00:52:42.030 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:52:42.031 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:52:42.031 [66/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:52:42.031 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:52:42.031 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:52:42.596 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:52:42.596 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:52:42.596 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:52:42.596 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:52:42.596 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:52:42.596 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:52:42.596 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:52:42.596 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:52:42.853 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:52:42.853 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:52:43.419 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:52:43.419 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:52:43.419 [81/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:52:43.419 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:52:43.419 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:52:43.419 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:52:43.419 [85/265] Linking static target lib/librte_ring.a 00:52:43.677 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:52:43.677 [87/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:52:43.935 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:52:43.935 [89/265] Linking static target lib/librte_eal.a 00:52:44.193 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:52:44.193 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:52:44.193 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:52:44.193 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:52:44.193 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:52:44.451 [95/265] Linking static target lib/librte_mempool.a 00:52:44.451 [96/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:52:44.451 [97/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:52:44.709 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:52:44.967 [99/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:52:44.967 [100/265] Linking static target lib/librte_rcu.a 00:52:44.967 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:52:44.967 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:52:44.967 [103/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:52:45.225 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:52:45.225 [105/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:52:45.225 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:52:45.483 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:52:45.483 [108/265] Linking static target lib/librte_net.a 00:52:45.483 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:52:45.483 [110/265] Linking static target lib/librte_meter.a 00:52:45.740 [111/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:52:45.740 [112/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:52:45.997 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:52:45.997 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:52:45.997 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:52:46.254 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:52:46.819 [117/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:52:46.819 [118/265] Linking static target lib/librte_mbuf.a 00:52:46.819 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:52:47.077 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:52:47.077 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:52:47.335 [122/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture 
output) 00:52:47.593 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:52:47.850 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:52:47.850 [125/265] Linking static target lib/librte_pci.a 00:52:47.850 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:52:47.850 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:52:48.108 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:52:48.108 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:52:48.108 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:52:48.108 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:52:48.108 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:52:48.366 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:52:48.366 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:52:48.366 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:52:48.366 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:52:48.366 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:52:48.366 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:52:48.622 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:52:48.622 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:52:48.879 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:52:48.879 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:52:48.879 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:52:48.879 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:52:48.879 [145/265] Linking static target lib/librte_cmdline.a 00:52:49.810 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:52:49.810 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:52:49.810 [148/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:52:50.068 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:52:50.068 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:52:50.068 [151/265] Linking static target lib/librte_timer.a 00:52:50.324 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:52:50.581 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:52:50.581 [154/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:52:50.581 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:52:50.581 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:52:50.581 [157/265] Linking static target lib/librte_compressdev.a 00:52:50.838 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:52:50.838 [159/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:52:51.096 [160/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:52:51.096 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:52:51.371 [162/265] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:52:51.371 [163/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:52:51.653 [164/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:52:51.653 [165/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:52:51.911 [166/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:52:51.911 [167/265] Linking static target lib/librte_dmadev.a 00:52:51.911 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:52:52.476 [169/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:52:52.734 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:52:52.734 [171/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:52:52.991 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:52:53.248 [173/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:52:53.248 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:52:54.663 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:52:54.663 [176/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:52:54.663 [177/265] Linking static target lib/librte_cryptodev.a 00:52:54.663 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:52:54.663 [179/265] Linking static target lib/librte_power.a 00:52:54.920 [180/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:52:54.920 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:52:54.920 [182/265] Linking static target lib/librte_security.a 00:52:55.177 [183/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:52:55.177 [184/265] Linking static target lib/librte_reorder.a 00:52:55.435 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:52:55.694 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:52:55.694 [187/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:52:55.694 [188/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:52:55.694 [189/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:52:55.962 [190/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:52:55.962 [191/265] Linking static target lib/librte_ethdev.a 00:52:56.896 [192/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:52:56.896 [193/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:52:56.896 [194/265] Linking static target lib/librte_hash.a 00:52:57.154 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:52:57.154 [196/265] Linking target lib/librte_eal.so.24.0 00:52:57.154 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:52:57.412 [198/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:52:57.412 [199/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:52:57.670 [200/265] Linking target lib/librte_ring.so.24.0 00:52:57.670 [201/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:52:57.670 [202/265] Linking target 
lib/librte_meter.so.24.0 00:52:57.952 [203/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:52:57.952 [204/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:52:58.212 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:52:58.212 [206/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:52:58.212 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:52:58.212 [208/265] Linking target lib/librte_pci.so.24.0 00:52:58.470 [209/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:52:58.728 [210/265] Linking target lib/librte_timer.so.24.0 00:52:58.984 [211/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:52:59.242 [212/265] Linking target lib/librte_mempool.so.24.0 00:52:59.242 [213/265] Linking target lib/librte_rcu.so.24.0 00:52:59.242 [214/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:52:59.500 [215/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:52:59.500 [216/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:52:59.500 [217/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:52:59.758 [218/265] Linking target lib/librte_dmadev.so.24.0 00:52:59.758 [219/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:52:59.758 [220/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:52:59.758 [221/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:53:00.366 [222/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:53:00.366 [223/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:53:00.624 [224/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:53:00.624 [225/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:53:00.624 [226/265] Linking static target drivers/librte_bus_vdev.a 00:53:00.624 [227/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:53:00.624 [228/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:53:00.882 [229/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:53:00.882 [230/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:53:01.140 [231/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:53:01.140 [232/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:53:01.140 [233/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:53:01.140 [234/265] Linking static target drivers/librte_bus_pci.a 00:53:01.401 [235/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:53:01.401 [236/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:53:01.401 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:53:01.660 [238/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:53:01.660 [239/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:53:01.660 [240/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:53:01.660 [241/265] Linking static target 
drivers/librte_mempool_ring.a 00:53:01.660 [242/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:53:02.254 [243/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:53:02.254 [244/265] Linking target drivers/librte_mempool_ring.so.24.0 00:53:02.512 [245/265] Linking target lib/librte_mbuf.so.24.0 00:53:02.512 [246/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:53:03.078 [247/265] Linking target lib/librte_reorder.so.24.0 00:53:03.337 [248/265] Linking target lib/librte_compressdev.so.24.0 00:53:03.595 [249/265] Linking target drivers/librte_bus_pci.so.24.0 00:53:03.854 [250/265] Linking target lib/librte_net.so.24.0 00:53:04.113 [251/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:53:06.029 [252/265] Linking target lib/librte_cmdline.so.24.0 00:53:06.029 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:53:06.029 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:53:06.595 [255/265] Linking target lib/librte_security.so.24.0 00:53:09.125 [256/265] Linking target lib/librte_ethdev.so.24.0 00:53:09.125 [257/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:53:10.106 [258/265] Linking target lib/librte_hash.so.24.0 00:53:10.106 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:53:12.635 [260/265] Linking target lib/librte_power.so.24.0 00:53:15.920 [261/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:53:48.027 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:53:48.027 [263/265] Linking static target lib/librte_vhost.a 00:53:48.027 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:54:06.106 [265/265] Linking target lib/librte_vhost.so.24.0 00:54:06.106 INFO: autodetecting backend as ninja 00:54:06.106 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:54:06.106 CC lib/ut_mock/mock.o 00:54:06.106 CC lib/log/log.o 00:54:06.106 CC lib/log/log_flags.o 00:54:06.106 CC lib/log/log_deprecated.o 00:54:06.106 CC lib/ut/ut.o 00:54:06.106 LIB libspdk_ut_mock.a 00:54:06.106 LIB libspdk_log.a 00:54:06.107 LIB libspdk_ut.a 00:54:06.107 CC lib/util/base64.o 00:54:06.107 CC lib/util/bit_array.o 00:54:06.107 CC lib/util/cpuset.o 00:54:06.107 CC lib/dma/dma.o 00:54:06.107 CC lib/util/crc16.o 00:54:06.107 CC lib/util/crc32.o 00:54:06.107 CC lib/util/crc32c.o 00:54:06.107 CXX lib/trace_parser/trace.o 00:54:06.107 CC lib/ioat/ioat.o 00:54:06.107 CC lib/vfio_user/host/vfio_user_pci.o 00:54:06.107 CC lib/util/crc32_ieee.o 00:54:06.107 CC lib/util/crc64.o 00:54:06.107 CC lib/util/dif.o 00:54:06.107 LIB libspdk_dma.a 00:54:06.107 CC lib/util/fd.o 00:54:06.107 CC lib/util/file.o 00:54:06.107 LIB libspdk_ioat.a 00:54:06.107 CC lib/util/hexlify.o 00:54:06.107 CC lib/util/iov.o 00:54:06.107 CC lib/util/math.o 00:54:06.107 CC lib/vfio_user/host/vfio_user.o 00:54:06.107 CC lib/util/pipe.o 00:54:06.107 CC lib/util/strerror_tls.o 00:54:06.107 CC lib/util/string.o 00:54:06.107 CC lib/util/uuid.o 00:54:06.107 CC lib/util/fd_group.o 00:54:06.107 CC lib/util/xor.o 00:54:06.107 CC lib/util/zipf.o 00:54:06.107 LIB libspdk_vfio_user.a 00:54:06.107 LIB libspdk_util.a 00:54:06.365 CC lib/rdma/common.o 00:54:06.365 CC lib/rdma/rdma_verbs.o 00:54:06.365 
CC lib/json/json_parse.o 00:54:06.365 CC lib/json/json_util.o 00:54:06.365 CC lib/json/json_write.o 00:54:06.365 CC lib/conf/conf.o 00:54:06.365 CC lib/idxd/idxd.o 00:54:06.365 CC lib/env_dpdk/env.o 00:54:06.365 CC lib/vmd/vmd.o 00:54:06.365 LIB libspdk_trace_parser.a 00:54:06.623 CC lib/vmd/led.o 00:54:06.623 CC lib/env_dpdk/memory.o 00:54:06.623 LIB libspdk_conf.a 00:54:06.623 CC lib/env_dpdk/pci.o 00:54:06.623 CC lib/idxd/idxd_user.o 00:54:06.623 CC lib/env_dpdk/init.o 00:54:06.623 LIB libspdk_rdma.a 00:54:06.623 CC lib/idxd/idxd_kernel.o 00:54:06.623 LIB libspdk_json.a 00:54:06.624 CC lib/env_dpdk/threads.o 00:54:06.624 CC lib/env_dpdk/pci_ioat.o 00:54:06.879 CC lib/env_dpdk/pci_virtio.o 00:54:06.879 CC lib/env_dpdk/pci_vmd.o 00:54:06.879 LIB libspdk_idxd.a 00:54:06.879 CC lib/env_dpdk/pci_idxd.o 00:54:06.879 CC lib/env_dpdk/pci_event.o 00:54:06.879 CC lib/env_dpdk/sigbus_handler.o 00:54:06.879 CC lib/env_dpdk/pci_dpdk.o 00:54:06.879 CC lib/jsonrpc/jsonrpc_server.o 00:54:06.879 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:54:06.879 CC lib/env_dpdk/pci_dpdk_2207.o 00:54:06.879 CC lib/jsonrpc/jsonrpc_client.o 00:54:06.879 LIB libspdk_vmd.a 00:54:07.136 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:54:07.136 CC lib/env_dpdk/pci_dpdk_2211.o 00:54:07.136 LIB libspdk_jsonrpc.a 00:54:07.394 CC lib/rpc/rpc.o 00:54:07.651 LIB libspdk_rpc.a 00:54:07.909 CC lib/trace/trace.o 00:54:07.909 CC lib/trace/trace_rpc.o 00:54:07.909 CC lib/trace/trace_flags.o 00:54:07.909 CC lib/sock/sock.o 00:54:07.909 CC lib/sock/sock_rpc.o 00:54:07.909 CC lib/notify/notify.o 00:54:07.909 CC lib/notify/notify_rpc.o 00:54:07.909 LIB libspdk_notify.a 00:54:07.909 LIB libspdk_env_dpdk.a 00:54:08.167 LIB libspdk_trace.a 00:54:08.167 LIB libspdk_sock.a 00:54:08.167 CC lib/thread/thread.o 00:54:08.167 CC lib/thread/iobuf.o 00:54:08.426 CC lib/nvme/nvme_ctrlr.o 00:54:08.426 CC lib/nvme/nvme_ctrlr_cmd.o 00:54:08.426 CC lib/nvme/nvme_fabric.o 00:54:08.426 CC lib/nvme/nvme_ns_cmd.o 00:54:08.426 CC lib/nvme/nvme_ns.o 00:54:08.426 CC lib/nvme/nvme_qpair.o 00:54:08.426 CC lib/nvme/nvme_pcie.o 00:54:08.426 CC lib/nvme/nvme_pcie_common.o 00:54:08.426 CC lib/nvme/nvme.o 00:54:09.362 LIB libspdk_thread.a 00:54:09.362 CC lib/nvme/nvme_quirks.o 00:54:09.362 CC lib/nvme/nvme_transport.o 00:54:09.362 CC lib/nvme/nvme_discovery.o 00:54:09.362 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:54:09.362 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:54:09.362 CC lib/nvme/nvme_tcp.o 00:54:09.362 CC lib/accel/accel.o 00:54:09.620 CC lib/nvme/nvme_opal.o 00:54:09.620 CC lib/nvme/nvme_io_msg.o 00:54:09.879 CC lib/nvme/nvme_poll_group.o 00:54:10.137 CC lib/accel/accel_rpc.o 00:54:10.137 CC lib/nvme/nvme_zns.o 00:54:10.137 CC lib/nvme/nvme_cuse.o 00:54:10.137 CC lib/nvme/nvme_vfio_user.o 00:54:10.396 CC lib/blob/blobstore.o 00:54:10.396 CC lib/accel/accel_sw.o 00:54:10.396 CC lib/init/json_config.o 00:54:10.396 CC lib/init/subsystem.o 00:54:10.654 CC lib/init/subsystem_rpc.o 00:54:10.654 CC lib/blob/request.o 00:54:10.654 CC lib/nvme/nvme_rdma.o 00:54:10.654 LIB libspdk_accel.a 00:54:10.654 CC lib/blob/zeroes.o 00:54:10.654 CC lib/init/rpc.o 00:54:10.654 CC lib/blob/blob_bs_dev.o 00:54:10.912 LIB libspdk_init.a 00:54:10.912 CC lib/virtio/virtio.o 00:54:10.912 CC lib/virtio/virtio_vhost_user.o 00:54:10.912 CC lib/virtio/virtio_vfio_user.o 00:54:10.912 CC lib/virtio/virtio_pci.o 00:54:11.171 CC lib/bdev/bdev.o 00:54:11.171 CC lib/event/app.o 00:54:11.171 CC lib/event/reactor.o 00:54:11.171 CC lib/bdev/bdev_rpc.o 00:54:11.171 CC lib/bdev/bdev_zone.o 00:54:11.171 CC lib/bdev/part.o 
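For orientation, the interleaved CC and LIB lines in this stretch of the log are SPDK's quiet make output: each CC line compiles one object, and each LIB line archives the finished objects into a static libspdk_*.a. A hypothetical expansion of one such pair from a few lines earlier (the real rules add the full include path and the sanitizer/LTO flag set chosen by the configuration above):

    # What a "CC lib/log/*.o ... LIB libspdk_log.a" sequence roughly expands to;
    # compiler flags here are abbreviated and purely illustrative.
    cc -O2 -flto -fsanitize=address -Iinclude -c lib/log/log.c -o lib/log/log.o
    cc -O2 -flto -fsanitize=address -Iinclude -c lib/log/log_flags.c -o lib/log/log_flags.o
    cc -O2 -flto -fsanitize=address -Iinclude -c lib/log/log_deprecated.c -o lib/log/log_deprecated.o
    ar crs build/lib/libspdk_log.a lib/log/log.o lib/log/log_flags.o lib/log/log_deprecated.o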
00:54:11.171 LIB libspdk_virtio.a 00:54:11.171 CC lib/event/log_rpc.o 00:54:11.171 CC lib/event/app_rpc.o 00:54:11.428 CC lib/bdev/scsi_nvme.o 00:54:11.428 CC lib/event/scheduler_static.o 00:54:11.428 LIB libspdk_event.a 00:54:11.994 LIB libspdk_nvme.a 00:54:11.994 LIB libspdk_blob.a 00:54:12.253 CC lib/lvol/lvol.o 00:54:12.253 CC lib/blobfs/blobfs.o 00:54:12.253 CC lib/blobfs/tree.o 00:54:12.819 LIB libspdk_blobfs.a 00:54:12.819 LIB libspdk_bdev.a 00:54:12.819 LIB libspdk_lvol.a 00:54:13.077 CC lib/ublk/ublk.o 00:54:13.077 CC lib/nbd/nbd.o 00:54:13.077 CC lib/nbd/nbd_rpc.o 00:54:13.077 CC lib/ublk/ublk_rpc.o 00:54:13.077 CC lib/scsi/dev.o 00:54:13.077 CC lib/scsi/lun.o 00:54:13.077 CC lib/scsi/port.o 00:54:13.077 CC lib/nvmf/ctrlr.o 00:54:13.077 CC lib/scsi/scsi.o 00:54:13.077 CC lib/ftl/ftl_core.o 00:54:13.077 CC lib/ftl/ftl_init.o 00:54:13.334 CC lib/scsi/scsi_bdev.o 00:54:13.334 CC lib/nvmf/ctrlr_discovery.o 00:54:13.334 CC lib/scsi/scsi_pr.o 00:54:13.334 CC lib/nvmf/ctrlr_bdev.o 00:54:13.334 CC lib/nvmf/subsystem.o 00:54:13.334 LIB libspdk_nbd.a 00:54:13.334 CC lib/nvmf/nvmf.o 00:54:13.334 CC lib/ftl/ftl_layout.o 00:54:13.334 CC lib/nvmf/nvmf_rpc.o 00:54:13.334 CC lib/nvmf/transport.o 00:54:13.592 LIB libspdk_ublk.a 00:54:13.592 CC lib/nvmf/tcp.o 00:54:13.592 CC lib/nvmf/rdma.o 00:54:13.592 CC lib/scsi/scsi_rpc.o 00:54:13.592 CC lib/ftl/ftl_debug.o 00:54:13.592 CC lib/ftl/ftl_io.o 00:54:13.851 CC lib/scsi/task.o 00:54:13.851 CC lib/ftl/ftl_sb.o 00:54:13.851 CC lib/ftl/ftl_l2p.o 00:54:13.851 CC lib/ftl/ftl_l2p_flat.o 00:54:13.851 CC lib/ftl/ftl_nv_cache.o 00:54:13.851 CC lib/ftl/ftl_band.o 00:54:13.851 CC lib/ftl/ftl_band_ops.o 00:54:13.851 LIB libspdk_scsi.a 00:54:13.851 CC lib/ftl/ftl_writer.o 00:54:14.109 CC lib/ftl/ftl_rq.o 00:54:14.109 CC lib/ftl/ftl_reloc.o 00:54:14.109 CC lib/ftl/ftl_l2p_cache.o 00:54:14.109 CC lib/ftl/ftl_p2l.o 00:54:14.109 CC lib/ftl/mngt/ftl_mngt.o 00:54:14.109 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:54:14.109 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:54:14.109 CC lib/ftl/mngt/ftl_mngt_startup.o 00:54:14.366 CC lib/ftl/mngt/ftl_mngt_md.o 00:54:14.366 CC lib/ftl/mngt/ftl_mngt_misc.o 00:54:14.366 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:54:14.366 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:54:14.366 CC lib/ftl/mngt/ftl_mngt_band.o 00:54:14.366 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:54:14.366 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:54:14.624 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:54:14.624 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:54:14.624 CC lib/ftl/utils/ftl_conf.o 00:54:14.624 CC lib/ftl/utils/ftl_md.o 00:54:14.624 CC lib/ftl/utils/ftl_mempool.o 00:54:14.624 CC lib/ftl/utils/ftl_bitmap.o 00:54:14.625 LIB libspdk_nvmf.a 00:54:14.625 CC lib/ftl/utils/ftl_property.o 00:54:14.625 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:54:14.625 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:54:14.882 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:54:14.882 CC lib/iscsi/conn.o 00:54:14.882 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:54:14.882 CC lib/vhost/vhost.o 00:54:14.882 CC lib/vhost/vhost_rpc.o 00:54:14.882 CC lib/vhost/vhost_scsi.o 00:54:14.882 CC lib/vhost/vhost_blk.o 00:54:14.882 CC lib/iscsi/init_grp.o 00:54:14.882 CC lib/vhost/rte_vhost_user.o 00:54:14.882 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:54:14.882 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:54:14.882 CC lib/ftl/upgrade/ftl_sb_v3.o 00:54:15.140 CC lib/iscsi/iscsi.o 00:54:15.140 CC lib/ftl/upgrade/ftl_sb_v5.o 00:54:15.140 CC lib/ftl/nvc/ftl_nvc_dev.o 00:54:15.141 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:54:15.141 CC lib/iscsi/md5.o 00:54:15.399 CC 
lib/ftl/base/ftl_base_dev.o 00:54:15.399 CC lib/iscsi/param.o 00:54:15.399 CC lib/iscsi/portal_grp.o 00:54:15.399 CC lib/iscsi/tgt_node.o 00:54:15.657 CC lib/ftl/base/ftl_base_bdev.o 00:54:15.657 CC lib/iscsi/iscsi_subsystem.o 00:54:15.657 CC lib/iscsi/iscsi_rpc.o 00:54:15.657 CC lib/iscsi/task.o 00:54:15.914 LIB libspdk_ftl.a 00:54:16.171 LIB libspdk_iscsi.a 00:54:16.171 LIB libspdk_vhost.a 00:54:16.428 CC module/env_dpdk/env_dpdk_rpc.o 00:54:16.428 CC module/accel/iaa/accel_iaa.o 00:54:16.428 CC module/accel/error/accel_error.o 00:54:16.428 CC module/accel/ioat/accel_ioat.o 00:54:16.428 CC module/scheduler/gscheduler/gscheduler.o 00:54:16.428 CC module/accel/dsa/accel_dsa.o 00:54:16.428 CC module/scheduler/dynamic/scheduler_dynamic.o 00:54:16.428 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:54:16.428 CC module/sock/posix/posix.o 00:54:16.428 CC module/blob/bdev/blob_bdev.o 00:54:16.428 LIB libspdk_env_dpdk_rpc.a 00:54:16.428 CC module/accel/dsa/accel_dsa_rpc.o 00:54:16.428 LIB libspdk_scheduler_dpdk_governor.a 00:54:16.428 CC module/accel/ioat/accel_ioat_rpc.o 00:54:16.428 CC module/accel/iaa/accel_iaa_rpc.o 00:54:16.428 LIB libspdk_scheduler_gscheduler.a 00:54:16.428 CC module/accel/error/accel_error_rpc.o 00:54:16.428 LIB libspdk_scheduler_dynamic.a 00:54:16.687 LIB libspdk_blob_bdev.a 00:54:16.687 LIB libspdk_accel_ioat.a 00:54:16.687 LIB libspdk_accel_iaa.a 00:54:16.687 LIB libspdk_accel_dsa.a 00:54:16.687 LIB libspdk_accel_error.a 00:54:16.687 CC module/bdev/lvol/vbdev_lvol.o 00:54:16.687 CC module/bdev/null/bdev_null.o 00:54:16.687 CC module/bdev/delay/vbdev_delay.o 00:54:16.687 CC module/bdev/nvme/bdev_nvme.o 00:54:16.687 CC module/bdev/gpt/gpt.o 00:54:16.687 CC module/bdev/malloc/bdev_malloc.o 00:54:16.687 CC module/blobfs/bdev/blobfs_bdev.o 00:54:16.687 CC module/bdev/error/vbdev_error.o 00:54:16.945 CC module/bdev/passthru/vbdev_passthru.o 00:54:16.945 LIB libspdk_sock_posix.a 00:54:16.945 CC module/bdev/error/vbdev_error_rpc.o 00:54:16.945 CC module/bdev/gpt/vbdev_gpt.o 00:54:16.945 CC module/bdev/null/bdev_null_rpc.o 00:54:16.945 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:54:17.204 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:54:17.204 CC module/bdev/malloc/bdev_malloc_rpc.o 00:54:17.204 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:54:17.204 CC module/bdev/delay/vbdev_delay_rpc.o 00:54:17.204 LIB libspdk_bdev_error.a 00:54:17.204 LIB libspdk_bdev_null.a 00:54:17.204 LIB libspdk_blobfs_bdev.a 00:54:17.204 CC module/bdev/raid/bdev_raid.o 00:54:17.204 LIB libspdk_bdev_gpt.a 00:54:17.204 CC module/bdev/split/vbdev_split.o 00:54:17.204 LIB libspdk_bdev_malloc.a 00:54:17.204 LIB libspdk_bdev_passthru.a 00:54:17.204 LIB libspdk_bdev_delay.a 00:54:17.204 CC module/bdev/zone_block/vbdev_zone_block.o 00:54:17.204 CC module/bdev/aio/bdev_aio.o 00:54:17.462 CC module/bdev/split/vbdev_split_rpc.o 00:54:17.462 CC module/bdev/ftl/bdev_ftl.o 00:54:17.462 LIB libspdk_bdev_lvol.a 00:54:17.462 CC module/bdev/iscsi/bdev_iscsi.o 00:54:17.462 CC module/bdev/nvme/bdev_nvme_rpc.o 00:54:17.462 CC module/bdev/virtio/bdev_virtio_scsi.o 00:54:17.462 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:54:17.462 LIB libspdk_bdev_split.a 00:54:17.462 CC module/bdev/raid/bdev_raid_rpc.o 00:54:17.720 CC module/bdev/ftl/bdev_ftl_rpc.o 00:54:17.720 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:54:17.720 CC module/bdev/aio/bdev_aio_rpc.o 00:54:17.720 CC module/bdev/raid/bdev_raid_sb.o 00:54:17.720 LIB libspdk_bdev_iscsi.a 00:54:17.720 CC module/bdev/nvme/nvme_rpc.o 00:54:17.720 CC module/bdev/raid/raid0.o 
00:54:17.720 LIB libspdk_bdev_zone_block.a 00:54:17.720 LIB libspdk_bdev_aio.a 00:54:17.720 CC module/bdev/raid/raid1.o 00:54:17.720 CC module/bdev/raid/concat.o 00:54:17.720 CC module/bdev/virtio/bdev_virtio_blk.o 00:54:17.720 CC module/bdev/virtio/bdev_virtio_rpc.o 00:54:17.720 LIB libspdk_bdev_ftl.a 00:54:17.978 CC module/bdev/nvme/bdev_mdns_client.o 00:54:17.978 CC module/bdev/raid/raid5f.o 00:54:17.978 CC module/bdev/nvme/vbdev_opal.o 00:54:17.978 CC module/bdev/nvme/vbdev_opal_rpc.o 00:54:17.978 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:54:17.978 LIB libspdk_bdev_virtio.a 00:54:18.236 LIB libspdk_bdev_raid.a 00:54:18.236 LIB libspdk_bdev_nvme.a 00:54:18.803 CC module/event/subsystems/iobuf/iobuf.o 00:54:18.803 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:54:18.803 CC module/event/subsystems/sock/sock.o 00:54:18.803 CC module/event/subsystems/scheduler/scheduler.o 00:54:18.803 CC module/event/subsystems/vmd/vmd.o 00:54:18.803 CC module/event/subsystems/vmd/vmd_rpc.o 00:54:18.803 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:54:18.803 LIB libspdk_event_sock.a 00:54:18.803 LIB libspdk_event_vmd.a 00:54:18.803 LIB libspdk_event_iobuf.a 00:54:18.803 LIB libspdk_event_vhost_blk.a 00:54:18.803 LIB libspdk_event_scheduler.a 00:54:19.071 CC module/event/subsystems/accel/accel.o 00:54:19.071 LIB libspdk_event_accel.a 00:54:19.329 CC module/event/subsystems/bdev/bdev.o 00:54:19.587 LIB libspdk_event_bdev.a 00:54:19.587 CC module/event/subsystems/scsi/scsi.o 00:54:19.587 CC module/event/subsystems/nbd/nbd.o 00:54:19.587 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:54:19.587 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:54:19.587 CC module/event/subsystems/ublk/ublk.o 00:54:19.845 LIB libspdk_event_nbd.a 00:54:19.845 LIB libspdk_event_ublk.a 00:54:19.845 LIB libspdk_event_scsi.a 00:54:19.845 LIB libspdk_event_nvmf.a 00:54:19.845 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:54:19.845 CC module/event/subsystems/iscsi/iscsi.o 00:54:20.103 LIB libspdk_event_vhost_scsi.a 00:54:20.103 LIB libspdk_event_iscsi.a 00:54:20.361 CXX app/trace/trace.o 00:54:20.361 CC examples/nvme/hello_world/hello_world.o 00:54:20.361 CC examples/ioat/perf/perf.o 00:54:20.361 CC examples/accel/perf/accel_perf.o 00:54:20.361 CC examples/sock/hello_world/hello_sock.o 00:54:20.361 CC examples/bdev/hello_world/hello_bdev.o 00:54:20.361 CC test/accel/dif/dif.o 00:54:20.361 CC examples/blob/hello_world/hello_blob.o 00:54:20.361 CC test/app/bdev_svc/bdev_svc.o 00:54:20.361 CC test/bdev/bdevio/bdevio.o 00:54:20.619 LINK ioat_perf 00:54:20.619 LINK bdev_svc 00:54:20.619 LINK hello_world 00:54:20.619 LINK hello_sock 00:54:20.619 LINK hello_bdev 00:54:20.877 LINK dif 00:54:20.877 LINK hello_blob 00:54:20.877 LINK spdk_trace 00:54:20.877 LINK accel_perf 00:54:20.877 LINK bdevio 00:54:33.078 CC app/trace_record/trace_record.o 00:54:34.015 LINK spdk_trace_record 00:54:52.093 CC examples/ioat/verify/verify.o 00:54:52.094 LINK verify 00:55:14.020 CC examples/vmd/lsvmd/lsvmd.o 00:55:14.020 LINK lsvmd 00:55:15.391 CC app/nvmf_tgt/nvmf_main.o 00:55:16.325 LINK nvmf_tgt 00:55:26.330 CC examples/nvme/reconnect/reconnect.o 00:55:26.330 CC examples/nvme/nvme_manage/nvme_manage.o 00:55:28.231 LINK reconnect 00:55:29.166 LINK nvme_manage 00:55:33.353 CC app/iscsi_tgt/iscsi_tgt.o 00:55:34.729 LINK iscsi_tgt 00:56:42.485 CC examples/vmd/led/led.o 00:56:42.485 LINK led 00:57:00.588 CC examples/nvme/arbitration/arbitration.o 00:57:01.154 LINK arbitration 00:57:07.707 CC app/spdk_tgt/spdk_tgt.o 00:57:09.610 LINK spdk_tgt 00:57:31.532 
CC examples/bdev/bdevperf/bdevperf.o 00:57:31.790 CC app/spdk_lspci/spdk_lspci.o 00:57:32.047 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:57:32.304 LINK spdk_lspci 00:57:32.870 CC test/app/histogram_perf/histogram_perf.o 00:57:33.435 LINK bdevperf 00:57:33.435 LINK histogram_perf 00:57:33.693 LINK nvme_fuzz 00:57:35.593 CC examples/blob/cli/blobcli.o 00:57:36.967 LINK blobcli 00:57:37.899 CC test/blobfs/mkfs/mkfs.o 00:57:38.844 LINK mkfs 00:58:17.591 CC examples/nvme/hotplug/hotplug.o 00:58:17.591 LINK hotplug 00:58:17.591 TEST_HEADER include/spdk/config.h 00:58:17.591 CXX test/cpp_headers/accel.o 00:58:18.525 CXX test/cpp_headers/accel_module.o 00:58:20.423 CXX test/cpp_headers/assert.o 00:58:22.322 CXX test/cpp_headers/barrier.o 00:58:24.221 CXX test/cpp_headers/base64.o 00:58:26.124 CXX test/cpp_headers/bdev.o 00:58:28.025 CXX test/cpp_headers/bdev_module.o 00:58:30.556 CXX test/cpp_headers/bdev_zone.o 00:58:32.465 CXX test/cpp_headers/bit_array.o 00:58:34.367 CXX test/cpp_headers/bit_pool.o 00:58:36.290 CXX test/cpp_headers/blob.o 00:58:38.195 CXX test/cpp_headers/blob_bdev.o 00:58:40.725 CXX test/cpp_headers/blobfs.o 00:58:41.660 CXX test/cpp_headers/blobfs_bdev.o 00:58:43.561 CXX test/cpp_headers/conf.o 00:58:45.464 CXX test/cpp_headers/config.o 00:58:45.464 CXX test/cpp_headers/cpuset.o 00:58:47.404 CXX test/cpp_headers/crc16.o 00:58:49.301 CXX test/cpp_headers/crc32.o 00:58:49.558 CC app/spdk_nvme_perf/perf.o 00:58:50.931 CXX test/cpp_headers/crc64.o 00:58:52.304 CXX test/cpp_headers/dif.o 00:58:54.228 CXX test/cpp_headers/dma.o 00:58:54.795 LINK spdk_nvme_perf 00:58:55.729 CXX test/cpp_headers/endian.o 00:58:57.639 CXX test/cpp_headers/env.o 00:58:59.010 CXX test/cpp_headers/env_dpdk.o 00:58:59.268 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:59:00.239 CXX test/cpp_headers/event.o 00:59:01.615 CXX test/cpp_headers/fd.o 00:59:02.989 CXX test/cpp_headers/fd_group.o 00:59:04.365 CXX test/cpp_headers/file.o 00:59:06.268 CXX test/cpp_headers/ftl.o 00:59:07.251 LINK iscsi_fuzz 00:59:07.818 CXX test/cpp_headers/gpt_spec.o 00:59:09.719 CXX test/cpp_headers/hexlify.o 00:59:09.719 CC app/spdk_nvme_identify/identify.o 00:59:11.095 CXX test/cpp_headers/histogram_data.o 00:59:13.039 CXX test/cpp_headers/idxd.o 00:59:14.445 CXX test/cpp_headers/idxd_spec.o 00:59:14.445 LINK spdk_nvme_identify 00:59:16.345 CXX test/cpp_headers/init.o 00:59:17.719 CXX test/cpp_headers/ioat.o 00:59:19.096 CXX test/cpp_headers/ioat_spec.o 00:59:20.470 CXX test/cpp_headers/iscsi_spec.o 00:59:21.845 CXX test/cpp_headers/json.o 00:59:22.780 CC app/spdk_nvme_discover/discovery_aer.o 00:59:24.168 CXX test/cpp_headers/jsonrpc.o 00:59:24.737 LINK spdk_nvme_discover 00:59:25.302 CXX test/cpp_headers/likely.o 00:59:27.202 CXX test/cpp_headers/log.o 00:59:28.575 CXX test/cpp_headers/lvol.o 00:59:30.471 CXX test/cpp_headers/memory.o 00:59:32.373 CXX test/cpp_headers/mmio.o 00:59:33.747 CXX test/cpp_headers/nbd.o 00:59:33.747 CXX test/cpp_headers/notify.o 00:59:35.646 CXX test/cpp_headers/nvme.o 00:59:35.646 CC examples/nvme/cmb_copy/cmb_copy.o 00:59:37.554 LINK cmb_copy 00:59:37.812 CXX test/cpp_headers/nvme_intel.o 00:59:39.715 CXX test/cpp_headers/nvme_ocssd.o 00:59:42.247 CXX test/cpp_headers/nvme_ocssd_spec.o 00:59:44.148 CXX test/cpp_headers/nvme_spec.o 00:59:46.050 CXX test/cpp_headers/nvme_zns.o 00:59:48.666 CXX test/cpp_headers/nvmf.o 00:59:50.567 CXX test/cpp_headers/nvmf_cmd.o 00:59:53.850 CXX test/cpp_headers/nvmf_fc_spec.o 00:59:55.818 CXX test/cpp_headers/nvmf_spec.o 00:59:58.348 CXX 
test/cpp_headers/nvmf_transport.o 01:00:00.880 CXX test/cpp_headers/opal.o 01:00:03.456 CXX test/cpp_headers/opal_spec.o 01:00:05.365 CXX test/cpp_headers/pci_ids.o 01:00:07.265 CXX test/cpp_headers/pipe.o 01:00:09.163 CXX test/cpp_headers/queue.o 01:00:09.163 CXX test/cpp_headers/reduce.o 01:00:11.062 CXX test/cpp_headers/rpc.o 01:00:13.593 CXX test/cpp_headers/scheduler.o 01:00:15.495 CXX test/cpp_headers/scsi.o 01:00:17.395 CXX test/cpp_headers/scsi_spec.o 01:00:19.295 CXX test/cpp_headers/sock.o 01:00:21.196 CXX test/cpp_headers/stdinc.o 01:00:22.569 CC examples/nvmf/nvmf/nvmf.o 01:00:22.569 CXX test/cpp_headers/string.o 01:00:24.475 CXX test/cpp_headers/thread.o 01:00:24.475 LINK nvmf 01:00:25.849 CXX test/cpp_headers/trace.o 01:00:27.748 CXX test/cpp_headers/trace_parser.o 01:00:29.202 CXX test/cpp_headers/tree.o 01:00:29.460 CXX test/cpp_headers/ublk.o 01:00:30.452 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 01:00:30.709 CXX test/cpp_headers/util.o 01:00:31.643 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 01:00:32.209 CXX test/cpp_headers/uuid.o 01:00:33.584 CXX test/cpp_headers/version.o 01:00:33.843 CXX test/cpp_headers/vfio_user_pci.o 01:00:34.101 LINK vhost_fuzz 01:00:35.057 CXX test/cpp_headers/vfio_user_spec.o 01:00:36.432 CXX test/cpp_headers/vhost.o 01:00:36.432 CC test/dma/test_dma/test_dma.o 01:00:37.841 CXX test/cpp_headers/vmd.o 01:00:38.409 CC app/spdk_top/spdk_top.o 01:00:38.668 LINK test_dma 01:00:38.926 CXX test/cpp_headers/xor.o 01:00:40.333 CXX test/cpp_headers/zipf.o 01:00:41.271 CC test/app/jsoncat/jsoncat.o 01:00:41.529 LINK spdk_top 01:00:42.094 LINK jsoncat 01:00:47.354 CC test/app/stub/stub.o 01:00:47.354 CC examples/nvme/abort/abort.o 01:00:48.735 LINK stub 01:00:49.668 LINK abort 01:00:52.202 CC examples/util/zipf/zipf.o 01:00:53.578 LINK zipf 01:00:54.145 CC examples/thread/thread/thread_ex.o 01:00:55.519 LINK thread 01:01:10.420 CC examples/idxd/perf/perf.o 01:01:10.420 LINK idxd_perf 01:01:12.962 CC app/vhost/vhost.o 01:01:14.335 LINK vhost 01:01:19.658 CC app/spdk_dd/spdk_dd.o 01:01:21.560 LINK spdk_dd 01:01:21.560 CC test/env/mem_callbacks/mem_callbacks.o 01:01:22.126 CC test/env/vtophys/vtophys.o 01:01:22.126 LINK vtophys 01:01:28.685 LINK mem_callbacks 01:02:00.743 CC examples/interrupt_tgt/interrupt_tgt.o 01:02:00.743 CC examples/nvme/pmr_persistence/pmr_persistence.o 01:02:00.743 LINK interrupt_tgt 01:02:00.743 LINK pmr_persistence 01:02:04.936 CC test/event/event_perf/event_perf.o 01:02:05.869 LINK event_perf 01:02:09.151 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 01:02:10.526 LINK env_dpdk_post_init 01:02:14.711 CC test/env/memory/memory_ut.o 01:02:24.762 LINK memory_ut 01:02:39.650 CC test/env/pci/pci_ut.o 01:02:41.551 LINK pci_ut 01:03:13.686 CC test/event/reactor/reactor.o 01:03:13.686 LINK reactor 01:03:20.241 CC app/fio/nvme/fio_plugin.o 01:03:24.422 LINK spdk_nvme 01:03:27.726 CC test/lvol/esnap/esnap.o 01:03:37.693 CC test/nvme/aer/aer.o 01:03:38.771 LINK aer 01:03:48.750 CC test/rpc_client/rpc_client_test.o 01:03:48.750 LINK rpc_client_test 01:03:49.008 LINK esnap 01:03:51.539 CC test/thread/poller_perf/poller_perf.o 01:03:52.927 LINK poller_perf 01:04:01.042 CC test/thread/lock/spdk_lock.o 01:04:06.304 CC test/event/reactor_perf/reactor_perf.o 01:04:07.692 LINK reactor_perf 01:04:08.626 LINK spdk_lock 01:04:08.885 CC app/fio/bdev/fio_plugin.o 01:04:12.167 LINK spdk_bdev 01:04:34.086 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 01:04:34.667 LINK histogram_ut 01:04:34.926 CC test/nvme/reset/reset.o 01:04:35.861 CC 
test/nvme/sgl/sgl.o 01:04:36.426 LINK reset 01:04:37.360 LINK sgl 01:04:38.746 CC test/nvme/e2edp/nvme_dp.o 01:04:38.746 CC test/unit/lib/accel/accel.c/accel_ut.o 01:04:39.680 LINK nvme_dp 01:04:42.211 CC test/nvme/overhead/overhead.o 01:04:42.211 CC test/nvme/err_injection/err_injection.o 01:04:43.148 LINK err_injection 01:04:43.148 LINK overhead 01:04:45.679 LINK accel_ut 01:04:57.876 CC test/event/app_repeat/app_repeat.o 01:04:57.876 LINK app_repeat 01:04:59.791 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 01:05:12.009 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 01:05:16.186 LINK blob_bdev_ut 01:05:28.381 LINK bdev_ut 01:05:29.310 CC test/unit/lib/blob/blob.c/blob_ut.o 01:05:47.381 CC test/nvme/startup/startup.o 01:05:47.381 CC test/nvme/reserve/reserve.o 01:05:47.381 LINK startup 01:05:47.381 LINK reserve 01:05:48.756 CC test/nvme/simple_copy/simple_copy.o 01:05:49.688 LINK simple_copy 01:05:52.308 CC test/nvme/connect_stress/connect_stress.o 01:05:52.875 LINK connect_stress 01:05:53.441 LINK blob_ut 01:05:53.699 CC test/nvme/boot_partition/boot_partition.o 01:05:54.669 LINK boot_partition 01:06:06.864 CC test/unit/lib/bdev/part.c/part_ut.o 01:06:12.142 CC test/event/scheduler/scheduler.o 01:06:14.039 LINK scheduler 01:06:23.997 LINK part_ut 01:06:25.371 CC test/unit/lib/blobfs/tree.c/tree_ut.o 01:06:26.749 LINK tree_ut 01:06:33.304 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 01:06:39.879 LINK blobfs_async_ut 01:06:42.415 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 01:06:43.359 LINK scsi_nvme_ut 01:06:46.704 CC test/nvme/compliance/nvme_compliance.o 01:06:48.078 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 01:06:48.335 LINK nvme_compliance 01:06:49.707 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 01:06:51.079 LINK gpt_ut 01:06:51.337 LINK blobfs_sync_ut 01:06:51.595 CC test/nvme/fused_ordering/fused_ordering.o 01:06:52.161 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 01:06:52.419 LINK fused_ordering 01:06:52.677 LINK blobfs_bdev_ut 01:06:55.959 CC test/nvme/doorbell_aers/doorbell_aers.o 01:06:56.525 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 01:06:56.784 LINK doorbell_aers 01:06:56.784 CC test/unit/lib/dma/dma.c/dma_ut.o 01:06:57.718 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 01:06:59.091 LINK dma_ut 01:07:00.989 LINK vbdev_lvol_ut 01:07:06.251 CC test/unit/lib/event/app.c/app_ut.o 01:07:08.158 LINK app_ut 01:07:11.464 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 01:07:11.464 LINK bdev_ut 01:07:13.362 LINK ioat_ut 01:07:13.362 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 01:07:18.667 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 01:07:21.192 LINK bdev_raid_sb_ut 01:07:21.192 CC test/unit/lib/event/reactor.c/reactor_ut.o 01:07:22.157 LINK bdev_raid_ut 01:07:24.682 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 01:07:25.248 LINK reactor_ut 01:07:26.621 LINK bdev_zone_ut 01:07:29.901 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 01:07:31.274 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 01:07:32.646 LINK concat_ut 01:07:34.569 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 01:07:35.501 LINK vbdev_zone_block_ut 01:07:36.900 LINK raid1_ut 01:07:41.129 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 01:07:43.655 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 01:07:44.221 CC test/nvme/fdp/fdp.o 01:07:44.787 LINK raid5f_ut 01:07:45.353 LINK fdp 01:07:45.919 CC test/nvme/cuse/cuse.o 01:07:46.202 CC test/unit/lib/iscsi/conn.c/conn_ut.o 01:07:46.460 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 01:07:48.992 
LINK conn_ut 01:07:49.248 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 01:07:49.506 LINK cuse 01:07:49.506 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 01:07:50.071 LINK jsonrpc_server_ut 01:07:51.004 LINK init_grp_ut 01:07:51.004 LINK json_parse_ut 01:07:53.555 CC test/unit/lib/log/log.c/log_ut.o 01:07:53.555 LINK bdev_nvme_ut 01:07:54.486 LINK log_ut 01:07:56.387 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 01:07:58.286 CC test/unit/lib/iscsi/param.c/param_ut.o 01:07:58.286 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 01:07:58.543 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 01:07:59.917 LINK param_ut 01:08:00.481 LINK portal_grp_ut 01:08:00.739 LINK tgt_node_ut 01:08:01.303 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 01:08:01.560 LINK iscsi_ut 01:08:05.738 CC test/unit/lib/json/json_util.c/json_util_ut.o 01:08:07.108 LINK lvol_ut 01:08:07.108 LINK json_util_ut 01:08:08.478 CC test/unit/lib/notify/notify.c/notify_ut.o 01:08:08.736 CC test/unit/lib/json/json_write.c/json_write_ut.o 01:08:09.299 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 01:08:09.299 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 01:08:09.864 LINK notify_ut 01:08:11.760 LINK json_write_ut 01:08:13.674 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 01:08:14.240 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 01:08:14.502 LINK nvme_ut 01:08:14.760 CC test/unit/lib/scsi/dev.c/dev_ut.o 01:08:16.163 LINK dev_ut 01:08:16.421 LINK tcp_ut 01:08:17.355 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 01:08:17.355 LINK nvme_ctrlr_cmd_ut 01:08:18.728 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 01:08:19.293 LINK nvme_ctrlr_ut 01:08:19.552 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 01:08:19.552 CC test/unit/lib/scsi/lun.c/lun_ut.o 01:08:19.552 LINK nvme_ctrlr_ocssd_cmd_ut 01:08:20.118 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 01:08:20.685 LINK lun_ut 01:08:20.685 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 01:08:21.250 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 01:08:21.814 LINK scsi_ut 01:08:22.072 LINK nvme_ns_ut 01:08:22.072 LINK ctrlr_ut 01:08:22.329 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 01:08:22.895 LINK nvme_ns_cmd_ut 01:08:23.483 LINK scsi_bdev_ut 01:08:23.483 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 01:08:23.741 LINK nvme_ns_ocssd_cmd_ut 01:08:23.999 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 01:08:24.565 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 01:08:25.130 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 01:08:25.389 LINK scsi_pr_ut 01:08:25.969 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 01:08:26.535 LINK subsystem_ut 01:08:26.535 LINK ctrlr_discovery_ut 01:08:26.792 LINK nvme_pcie_ut 01:08:26.792 LINK ctrlr_bdev_ut 01:08:27.356 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 01:08:27.356 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 01:08:27.613 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 01:08:28.177 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 01:08:28.742 LINK nvme_poll_group_ut 01:08:29.308 CC test/unit/lib/nvmf/transport.c/transport_ut.o 01:08:29.565 LINK nvmf_ut 01:08:29.823 LINK nvme_qpair_ut 01:08:30.754 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 01:08:30.754 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 01:08:30.754 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 01:08:30.754 LINK rdma_ut 01:08:31.011 CC test/unit/lib/sock/sock.c/sock_ut.o 01:08:31.281 CC test/unit/lib/sock/posix.c/posix_ut.o 01:08:31.543 LINK nvme_quirks_ut 01:08:31.543 LINK transport_ut 01:08:32.477 
LINK nvme_transport_ut 01:08:32.734 LINK posix_ut 01:08:32.992 LINK sock_ut 01:08:33.250 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 01:08:33.250 CC test/unit/lib/thread/thread.c/thread_ut.o 01:08:33.508 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 01:08:34.880 LINK iobuf_ut 01:08:35.445 LINK nvme_tcp_ut 01:08:36.011 LINK nvme_io_msg_ut 01:08:36.011 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 01:08:36.576 LINK thread_ut 01:08:37.530 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 01:08:38.905 LINK nvme_pcie_common_ut 01:08:38.905 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 01:08:39.163 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 01:08:39.163 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 01:08:39.421 CC test/unit/lib/util/base64.c/base64_ut.o 01:08:39.679 LINK base64_ut 01:08:39.936 LINK nvme_fabric_ut 01:08:39.936 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 01:08:39.936 LINK nvme_opal_ut 01:08:40.194 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 01:08:40.194 LINK pci_event_ut 01:08:40.451 LINK bit_array_ut 01:08:40.451 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 01:08:40.709 LINK nvme_cuse_ut 01:08:40.709 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 01:08:40.966 LINK subsystem_ut 01:08:40.966 LINK cpuset_ut 01:08:40.966 LINK nvme_rdma_ut 01:08:41.224 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 01:08:41.224 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 01:08:41.482 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 01:08:41.740 LINK rpc_ut 01:08:41.741 LINK idxd_user_ut 01:08:42.355 CC test/unit/lib/util/crc16.c/crc16_ut.o 01:08:42.355 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 01:08:42.613 LINK crc16_ut 01:08:42.613 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 01:08:42.613 LINK crc32_ieee_ut 01:08:42.613 LINK crc32c_ut 01:08:43.180 CC test/unit/lib/rdma/common.c/common_ut.o 01:08:43.180 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 01:08:43.180 LINK vhost_ut 01:08:43.180 CC test/unit/lib/util/crc64.c/crc64_ut.o 01:08:43.439 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 01:08:43.439 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 01:08:43.439 LINK crc64_ut 01:08:43.439 LINK ftl_l2p_ut 01:08:43.439 LINK common_ut 01:08:43.439 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 01:08:43.697 LINK ftl_bitmap_ut 01:08:43.955 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 01:08:43.955 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 01:08:43.955 LINK ftl_io_ut 01:08:44.213 CC test/unit/lib/util/dif.c/dif_ut.o 01:08:44.470 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 01:08:44.470 LINK ftl_mempool_ut 01:08:44.728 LINK idxd_ut 01:08:44.986 LINK ftl_band_ut 01:08:45.244 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 01:08:45.502 LINK ftl_mngt_ut 01:08:45.502 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 01:08:45.760 CC test/unit/lib/util/iov.c/iov_ut.o 01:08:45.760 LINK dif_ut 01:08:46.018 LINK iov_ut 01:08:46.584 CC test/unit/lib/util/math.c/math_ut.o 01:08:46.841 CC test/unit/lib/util/pipe.c/pipe_ut.o 01:08:46.841 LINK ftl_sb_ut 01:08:46.841 CC test/unit/lib/util/string.c/string_ut.o 01:08:46.841 LINK math_ut 01:08:47.099 LINK ftl_layout_upgrade_ut 01:08:47.099 LINK pipe_ut 01:08:47.356 LINK string_ut 01:08:47.940 CC test/unit/lib/util/xor.c/xor_ut.o 01:08:48.197 LINK xor_ut 01:10:24.685 16:47:19 -- spdk/autopackage.sh@44 -- $ make -j10 clean 01:10:24.685 make[1]: Nothing to be done for 'clean'. 
01:10:24.685 16:47:23 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 01:10:24.685 16:47:23 -- common/autotest_common.sh@718 -- $ xtrace_disable 01:10:24.685 16:47:23 -- common/autotest_common.sh@10 -- $ set +x 01:10:24.685 16:47:23 -- spdk/autopackage.sh@48 -- $ timing_finish 01:10:24.685 16:47:23 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:10:24.685 16:47:23 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 01:10:24.685 16:47:23 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:10:24.685 + [[ -n 2366 ]] 01:10:24.685 + sudo kill 2366 01:10:24.695 [Pipeline] } 01:10:24.717 [Pipeline] // timeout 01:10:24.724 [Pipeline] } 01:10:24.739 [Pipeline] // stage 01:10:24.744 [Pipeline] } 01:10:24.761 [Pipeline] // catchError 01:10:24.770 [Pipeline] stage 01:10:24.772 [Pipeline] { (Stop VM) 01:10:24.785 [Pipeline] sh 01:10:25.062 + vagrant halt 01:10:29.269 ==> default: Halting domain... 01:10:34.543 [Pipeline] sh 01:10:34.825 + vagrant destroy -f 01:10:38.112 ==> default: Removing domain... 01:10:38.382 [Pipeline] sh 01:10:38.661 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest_2/output 01:10:38.718 [Pipeline] } 01:10:38.739 [Pipeline] // stage 01:10:38.747 [Pipeline] } 01:10:38.766 [Pipeline] // dir 01:10:38.772 [Pipeline] } 01:10:38.792 [Pipeline] // wrap 01:10:38.800 [Pipeline] } 01:10:38.819 [Pipeline] // catchError 01:10:38.831 [Pipeline] stage 01:10:38.833 [Pipeline] { (Epilogue) 01:10:38.851 [Pipeline] sh 01:10:39.145 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:11:01.083 [Pipeline] catchError 01:11:01.085 [Pipeline] { 01:11:01.098 [Pipeline] sh 01:11:01.378 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:11:01.636 Artifacts sizes are good 01:11:01.644 [Pipeline] } 01:11:01.662 [Pipeline] // catchError 01:11:01.674 [Pipeline] archiveArtifacts 01:11:01.680 Archiving artifacts 01:11:02.074 [Pipeline] cleanWs 01:11:02.084 [WS-CLEANUP] Deleting project workspace... 01:11:02.084 [WS-CLEANUP] Deferred wipeout is used... 01:11:02.090 [WS-CLEANUP] done 01:11:02.092 [Pipeline] } 01:11:02.108 [Pipeline] // stage 01:11:02.113 [Pipeline] } 01:11:02.129 [Pipeline] // node 01:11:02.135 [Pipeline] End of Pipeline 01:11:02.168 Finished: SUCCESS